Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7077496 2022-10-23 03:32:39 2022-10-23 03:36:10 2022-10-23 04:05:24 0:29:14 0:19:43 0:09:31 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 95013a22-5285-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
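This is the post-upgrade version assertion (the same check fails in the other cephadm upgrade and mds_upgrade_sequence jobs below): every daemon must report the target sha1 once the upgrade converges. A rough standalone sketch of the inner check, run outside `cephadm shell` and assuming an admin keyring on the host, would be:

    # Sketch of the inner assertion only; TARGET_SHA1 taken from the -e sha1=... argument above.
    # `jq -e` exits non-zero if the filter yields no output or false/null, and
    # `grep` exits non-zero if the target sha1 is missing from the reported version keys.
    TARGET_SHA1=d43ef73d3699233fe79a16a2a64561a856f3e0cd
    ceph versions | jq -e '.overall | keys' | grep "$TARGET_SHA1"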

pass 7077497 2022-10-23 03:32:40 2022-10-23 03:36:11 2022-10-23 04:15:00 0:38:49 0:25:46 0:13:03 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7077498 2022-10-23 03:32:41 2022-10-23 03:38:22 2022-10-23 04:16:28 0:38:06 0:29:53 0:08:13 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7077499 2022-10-23 03:32:43 2022-10-23 03:39:12 2022-10-23 03:57:10 0:17:58 0:07:07 0:10:51 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Command failed on smithi062 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7077500 2022-10-23 03:32:44 2022-10-23 03:39:23 2022-10-23 03:53:49 0:14:26 0:06:29 0:07:57 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi055 with status 5: 'sudo systemctl stop ceph-23e5665a-5286-11ed-8438-001a4aab830c@mon.smithi055'
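This "systemctl stop ... status 5" pattern recurs through most of the cephadm failures in this run; exit status 5 from systemctl typically means the unit was not loaded, i.e. the mon service for the new fsid was never created before teardown tried to stop it. A hypothetical triage on the affected node (unit name copied from the message above) might look like:

    # Confirm the unit really does not exist and pull any bootstrap-time logs.
    sudo systemctl status 'ceph-23e5665a-5286-11ed-8438-001a4aab830c@mon.smithi055.service' --no-pager
    sudo journalctl -u 'ceph-23e5665a-5286-11ed-8438-001a4aab830c@mon.smithi055.service' --no-pager | tail -n 50
    # The cephadm host log usually carries the underlying bootstrap error.
    sudo tail -n 100 /var/log/ceph/cephadm.log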

pass 7077501 2022-10-23 03:32:45 2022-10-23 03:40:43 2022-10-23 06:04:23 2:23:40 2:15:46 0:07:54 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
fail 7077502 2022-10-23 03:32:46 2022-10-23 03:40:54 2022-10-23 04:01:03 0:20:09 0:09:38 0:10:31 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi027 with status 5: 'sudo systemctl stop ceph-097fd704-5287-11ed-8438-001a4aab830c@mon.a'

pass 7077503 2022-10-23 03:32:47 2022-10-23 03:43:14 2022-10-23 04:05:02 0:21:48 0:13:46 0:08:02 smithi main rhel 8.4 rados/singleton/{all/mon-config-key-caps mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7077504 2022-10-23 03:32:48 2022-10-23 03:43:35 2022-10-23 04:05:35 0:22:00 0:14:33 0:07:27 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/prometheus} 2
pass 7077505 2022-10-23 03:32:50 2022-10-23 03:44:35 2022-10-23 04:09:20 0:24:45 0:16:11 0:08:34 smithi main rhel 8.4 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{rhel_8}} 1
fail 7077506 2022-10-23 03:32:51 2022-10-23 03:46:46 2022-10-23 04:03:50 0:17:04 0:10:26 0:06:38 smithi main rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi106 with status 5: 'sudo systemctl stop ceph-7d3b5344-5287-11ed-8438-001a4aab830c@mon.a'

pass 7077507 2022-10-23 03:32:52 2022-10-23 03:46:46 2022-10-23 04:03:52 0:17:06 0:07:12 0:09:54 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077508 2022-10-23 03:32:53 2022-10-23 03:46:47 2022-10-23 04:18:08 0:31:21 0:19:57 0:11:24 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi088 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ecd6a370-5287-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077509 2022-10-23 03:32:54 2022-10-23 03:50:38 2022-10-23 04:34:32 0:43:54 0:31:06 0:12:48 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077510 2022-10-23 03:32:56 2022-10-23 03:53:08 2022-10-23 04:10:12 0:17:04 0:06:02 0:11:02 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-50d7eea6-5288-11ed-8438-001a4aab830c@mon.a'

pass 7077511 2022-10-23 03:32:57 2022-10-23 03:56:59 2022-10-23 04:28:57 0:31:58 0:21:24 0:10:34 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
pass 7077512 2022-10-23 03:32:58 2022-10-23 03:57:20 2022-10-23 04:12:11 0:14:51 0:07:47 0:07:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7077513 2022-10-23 03:32:59 2022-10-23 03:57:20 2022-10-23 04:17:23 0:20:03 0:09:47 0:10:16 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7077514 2022-10-23 03:33:00 2022-10-23 03:58:01 2022-10-23 04:21:49 0:23:48 0:15:55 0:07:53 smithi main centos 8.stream rados/singleton/{all/mon-config-keys mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
fail 7077515 2022-10-23 03:33:02 2022-10-23 03:58:01 2022-10-23 04:13:20 0:15:19 0:04:54 0:10:25 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi008 with status 5: 'sudo systemctl stop ceph-c2f42b62-5288-11ed-8438-001a4aab830c@mon.smithi008'

fail 7077516 2022-10-23 03:33:03 2022-10-23 03:58:01 2022-10-23 04:16:20 0:18:19 0:07:46 0:10:33 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-221762bc-5289-11ed-8438-001a4aab830c@mon.smithi099'

pass 7077517 2022-10-23 03:33:04 2022-10-23 04:01:02 2022-10-23 04:50:04 0:49:02 0:42:51 0:06:11 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/erasure-code} 1
pass 7077518 2022-10-23 03:33:05 2022-10-23 04:01:12 2022-10-23 04:38:48 0:37:36 0:28:08 0:09:28 smithi main rhel 8.4 rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7077519 2022-10-23 03:33:06 2022-10-23 04:03:53 2022-10-23 04:20:43 0:16:50 0:10:12 0:06:38 smithi main centos 8.stream rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7077520 2022-10-23 03:33:07 2022-10-23 04:03:53 2022-10-23 04:27:24 0:23:31 0:14:12 0:09:19 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7077521 2022-10-23 03:33:09 2022-10-23 04:05:34 2022-10-23 04:18:48 0:13:14 0:07:29 0:05:45 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi045 with status 5: 'sudo systemctl stop ceph-b93179e4-5289-11ed-8438-001a4aab830c@mon.a'

pass 7077522 2022-10-23 03:33:10 2022-10-23 04:05:45 2022-10-23 04:43:18 0:37:33 0:25:37 0:11:56 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 7077523 2022-10-23 03:33:11 2022-10-23 04:07:35 2022-10-23 04:46:13 0:38:38 0:32:12 0:06:26 smithi main rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
fail 7077524 2022-10-23 03:33:12 2022-10-23 04:07:36 2022-10-23 04:34:36 0:27:00 0:21:14 0:05:46 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 44e677aa-528a-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077525 2022-10-23 03:33:14 2022-10-23 04:07:36 2022-10-23 04:28:40 0:21:04 0:09:03 0:12:01 smithi main ubuntu 20.04 rados/singleton/{all/mon-config mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077526 2022-10-23 03:33:15 2022-10-23 04:09:27 2022-10-23 04:24:51 0:15:24 0:09:08 0:06:16 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi203 with status 5: 'sudo systemctl stop ceph-924c5bb8-528a-11ed-8438-001a4aab830c@mon.a'

pass 7077527 2022-10-23 03:33:16 2022-10-23 04:09:37 2022-10-23 04:33:15 0:23:38 0:17:05 0:06:33 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
pass 7077528 2022-10-23 03:33:17 2022-10-23 04:10:17 2022-10-23 04:40:13 0:29:56 0:21:43 0:08:13 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-snaps} 2
fail 7077529 2022-10-23 03:33:18 2022-10-23 04:11:08 2022-10-23 04:23:49 0:12:41 0:07:30 0:05:11 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-6e82e1e8-528a-11ed-8438-001a4aab830c@mon.smithi052'

pass 7077530 2022-10-23 03:33:19 2022-10-23 04:11:08 2022-10-23 04:48:44 0:37:36 0:25:43 0:11:53 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_api_tests} 2
fail 7077531 2022-10-23 03:33:21 2022-10-23 04:13:29 2022-10-23 04:28:32 0:15:03 0:06:16 0:08:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-e88fb1dc-528a-11ed-8438-001a4aab830c@mon.a'

pass 7077532 2022-10-23 03:33:22 2022-10-23 04:15:00 2022-10-23 04:32:09 0:17:09 0:11:42 0:05:27 smithi main rhel 8.4 rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 7077533 2022-10-23 03:33:23 2022-10-23 04:15:10 2022-10-23 04:33:23 0:18:13 0:09:38 0:08:35 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi066 with status 5: 'sudo systemctl stop ceph-8d823994-528b-11ed-8438-001a4aab830c@mon.a'

fail 7077534 2022-10-23 03:33:24 2022-10-23 04:15:10 2022-10-23 04:29:41 0:14:31 0:06:35 0:07:56 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-1ee6a36c-528b-11ed-8438-001a4aab830c@mon.a'

pass 7077535 2022-10-23 03:33:25 2022-10-23 04:16:21 2022-10-23 05:02:50 0:46:29 0:39:30 0:06:59 smithi main centos 8.stream rados/singleton/{all/osd-backfill mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 7077536 2022-10-23 03:33:27 2022-10-23 04:16:21 2022-10-23 04:27:57 0:11:36 0:04:55 0:06:41 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi130 with status 5: 'sudo systemctl stop ceph-eb29fb96-528a-11ed-8438-001a4aab830c@mon.a'

fail 7077537 2022-10-23 03:33:28 2022-10-23 04:16:32 2022-10-23 04:28:47 0:12:15 0:04:58 0:07:17 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi035 with status 5: 'sudo systemctl stop ceph-0a9f9e9a-528b-11ed-8438-001a4aab830c@mon.smithi035'

pass 7077538 2022-10-23 03:33:29 2022-10-23 04:17:32 2022-10-23 04:39:55 0:22:23 0:11:07 0:11:16 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7077539 2022-10-23 03:33:30 2022-10-23 04:18:13 2022-10-23 05:07:36 0:49:23 0:41:49 0:07:34 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077540 2022-10-23 03:33:31 2022-10-23 04:18:33 2022-10-23 04:40:14 0:21:41 0:14:51 0:06:50 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache} 2
pass 7077541 2022-10-23 03:33:33 2022-10-23 04:18:34 2022-10-23 04:40:17 0:21:43 0:14:10 0:07:33 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
fail 7077542 2022-10-23 03:33:34 2022-10-23 04:18:34 2022-10-23 04:31:42 0:13:08 0:06:45 0:06:23 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi045 with status 5: 'sudo systemctl stop ceph-784a03a4-528b-11ed-8438-001a4aab830c@mon.smithi045'

pass 7077543 2022-10-23 03:33:35 2022-10-23 04:18:55 2022-10-23 04:41:27 0:22:32 0:11:03 0:11:29 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
pass 7077544 2022-10-23 03:33:36 2022-10-23 04:19:45 2022-10-23 04:38:51 0:19:06 0:13:18 0:05:48 smithi main rhel 8.4 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7077545 2022-10-23 03:33:38 2022-10-23 04:19:45 2022-10-23 04:44:05 0:24:20 0:16:18 0:08:02 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c6a49d5c-528b-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077546 2022-10-23 03:33:39 2022-10-23 04:19:46 2022-10-23 04:39:45 0:19:59 0:11:04 0:08:55 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/workunits} 2
pass 7077547 2022-10-23 03:33:40 2022-10-23 04:21:56 2022-10-23 05:06:50 0:44:54 0:34:44 0:10:10 smithi main ubuntu 20.04 rados/singleton/{all/osd-recovery-incomplete mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077548 2022-10-23 03:33:41 2022-10-23 04:23:57 2022-10-23 04:38:12 0:14:15 0:09:16 0:04:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-754cd8ba-528c-11ed-8438-001a4aab830c@mon.a'

fail 7077549 2022-10-23 03:33:43 2022-10-23 04:23:57 2022-10-23 04:52:08 0:28:11 0:18:27 0:09:44 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c3d9bd2-528c-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)"\''
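The staggered-upgrade step builds its redeploy target from a nested query against `ceph mgr dump`; one plausible way to hit status 22 (EINVAL) is the query coming back empty, which turns the target into the literal "mgr.". A sketch of the inner query on its own, using a single jq filter equivalent to the chained `jq .standbys | jq .[] | jq -r .name` above:

    # Standby-mgr name the redeploy target is built from (assumed equivalent filter).
    ceph mgr dump -f json | jq -r '.standbys[].name'
    # Empty output here would make the redeploy target "mgr." with no daemon name,
    # which ceph orch would reject.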

pass 7077550 2022-10-23 03:33:44 2022-10-23 04:24:38 2022-10-23 04:42:20 0:17:42 0:07:54 0:09:48 smithi main ubuntu 20.04 rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} 1
fail 7077551 2022-10-23 03:33:45 2022-10-23 04:24:58 2022-10-23 04:54:59 0:30:01 0:19:59 0:10:02 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi103 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 19c4d988-528d-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7077552 2022-10-23 03:33:46 2022-10-23 04:27:29 2022-10-23 04:41:17 0:13:48 0:06:37 0:07:11 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi050 with status 5: 'sudo systemctl stop ceph-bd65b6c6-528c-11ed-8438-001a4aab830c@mon.a'

pass 7077553 2022-10-23 03:33:47 2022-10-23 04:28:00 2022-10-23 05:07:40 0:39:40 0:32:58 0:06:42 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7077554 2022-10-23 03:33:48 2022-10-23 04:28:40 2022-10-23 04:46:01 0:17:21 0:08:33 0:08:48 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077555 2022-10-23 03:33:50 2022-10-23 04:28:41 2022-10-23 04:55:50 0:27:09 0:18:16 0:08:53 smithi main ubuntu 20.04 rados/singleton/{all/osd-recovery mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077556 2022-10-23 03:33:51 2022-10-23 04:28:41 2022-10-23 04:44:03 0:15:22 0:05:02 0:10:20 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi035 with status 5: 'sudo systemctl stop ceph-13b4850c-528d-11ed-8438-001a4aab830c@mon.smithi035'

pass 7077557 2022-10-23 03:33:52 2022-10-23 04:28:52 2022-10-23 04:52:32 0:23:40 0:15:53 0:07:47 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_python} 2
fail 7077558 2022-10-23 03:33:53 2022-10-23 04:29:02 2022-10-23 04:45:09 0:16:07 0:04:59 0:11:08 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-3a06d39a-528d-11ed-8438-001a4aab830c@mon.a'

pass 7077559 2022-10-23 03:33:55 2022-10-23 04:29:43 2022-10-23 04:51:05 0:21:22 0:15:12 0:06:10 smithi main rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077560 2022-10-23 03:33:56 2022-10-23 04:30:03 2022-10-23 04:48:01 0:17:58 0:10:18 0:07:40 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 2
pass 7077561 2022-10-23 03:33:57 2022-10-23 04:30:03 2022-10-23 05:02:56 0:32:53 0:22:36 0:10:17 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects} 2
fail 7077562 2022-10-23 03:33:58 2022-10-23 04:31:44 2022-10-23 04:51:26 0:19:42 0:10:03 0:09:39 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi055 with status 5: 'sudo systemctl stop ceph-1f9db1e4-528e-11ed-8438-001a4aab830c@mon.a'

fail 7077563 2022-10-23 03:34:00 2022-10-23 04:33:25 2022-10-23 04:47:34 0:14:09 0:06:36 0:07:33 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-a9664130-528d-11ed-8438-001a4aab830c@mon.smithi192'

fail 7077564 2022-10-23 03:34:01 2022-10-23 04:34:35 2022-10-23 04:51:52 0:17:17 0:09:36 0:07:41 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-214d6f98-528e-11ed-8438-001a4aab830c@mon.a'

pass 7077565 2022-10-23 03:34:02 2022-10-23 04:34:46 2022-10-23 04:53:22 0:18:36 0:12:32 0:06:04 smithi main rhel 8.4 rados/singleton/{all/peer mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
pass 7077566 2022-10-23 03:34:03 2022-10-23 04:34:46 2022-10-23 04:53:53 0:19:07 0:09:18 0:09:49 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077567 2022-10-23 03:34:05 2022-10-23 04:34:46 2022-10-23 04:52:21 0:17:35 0:06:41 0:10:54 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi081 with status 5: 'sudo systemctl stop ceph-33a0a30e-528e-11ed-8438-001a4aab830c@mon.a'

fail 7077568 2022-10-23 03:34:06 2022-10-23 04:34:47 2022-10-23 05:14:29 0:39:42 0:24:59 0:14:43 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

"2022-10-23T05:00:39.825181+0000 osd.4 (osd.4) 42 : cluster [ERR] 186.6 3 tried to pull 186:60000000:.ceph-internal::hit_set_186.6_archive_2022-10-23 05%3a00%3a20.844079Z_2022-10-23 05%3a00%3a23.903746Z:head but got (2) No such file or directory" in cluster log

pass 7077569 2022-10-23 03:34:07 2022-10-23 04:38:58 2022-10-23 05:32:07 0:53:09 0:46:06 0:07:03 smithi main rhel 8.4 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
pass 7077570 2022-10-23 03:34:08 2022-10-23 04:38:58 2022-10-23 04:55:31 0:16:33 0:08:07 0:08:26 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
fail 7077571 2022-10-23 03:34:10 2022-10-23 04:38:58 2022-10-23 04:53:48 0:14:50 0:06:15 0:08:35 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi088 with status 5: 'sudo systemctl stop ceph-6fac1e32-528e-11ed-8438-001a4aab830c@mon.a'

fail 7077572 2022-10-23 03:34:11 2022-10-23 04:39:59 2022-10-23 04:56:24 0:16:25 0:07:56 0:08:29 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi071 with status 5: 'sudo systemctl stop ceph-c02c1e0c-528e-11ed-8438-001a4aab830c@mon.smithi071'

fail 7077573 2022-10-23 03:34:12 2022-10-23 04:39:59 2022-10-23 05:10:04 0:30:05 0:21:29 0:08:36 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 02c09540-528f-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077574 2022-10-23 03:34:13 2022-10-23 04:40:20 2022-10-23 05:02:09 0:21:49 0:13:56 0:07:53 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/mgr} 1
pass 7077575 2022-10-23 03:34:14 2022-10-23 04:40:20 2022-10-23 05:03:25 0:23:05 0:16:28 0:06:37 smithi main centos 8.stream rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
fail 7077576 2022-10-23 03:34:15 2022-10-23 04:40:21 2022-10-23 04:54:19 0:13:58 0:06:49 0:07:09 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi130 with status 5: 'sudo systemctl stop ceph-a26e1622-528e-11ed-8438-001a4aab830c@mon.a'

pass 7077577 2022-10-23 03:34:17 2022-10-23 04:41:21 2022-10-23 05:20:15 0:38:54 0:32:35 0:06:19 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7077578 2022-10-23 03:34:18 2022-10-23 04:41:22 2022-10-23 05:03:12 0:21:50 0:12:25 0:09:25 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077579 2022-10-23 03:34:19 2022-10-23 04:41:32 2022-10-23 05:22:47 0:41:15 0:32:01 0:09:14 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7077580 2022-10-23 03:34:20 2022-10-23 04:43:23 2022-10-23 05:19:21 0:35:58 0:27:51 0:08:07 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7077581 2022-10-23 03:34:22 2022-10-23 04:44:13 2022-10-23 04:58:50 0:14:37 0:07:47 0:06:50 smithi main rhel 8.4 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-13aa1750-528f-11ed-8438-001a4aab830c@mon.smithi197'

pass 7077582 2022-10-23 03:34:23 2022-10-23 04:44:14 2022-10-23 05:46:15 1:02:01 0:55:19 0:06:42 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
pass 7077583 2022-10-23 03:34:24 2022-10-23 04:45:14 2022-10-23 05:01:29 0:16:15 0:11:28 0:04:47 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
pass 7077584 2022-10-23 03:34:25 2022-10-23 04:45:15 2022-10-23 05:51:16 1:06:01 0:56:19 0:09:42 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7077585 2022-10-23 03:34:26 2022-10-23 04:46:15 2022-10-23 05:10:30 0:24:15 0:16:34 0:07:41 smithi main rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} tasks/crash} 2
fail 7077586 2022-10-23 03:34:28 2022-10-23 04:47:36 2022-10-23 05:02:51 0:15:15 0:08:04 0:07:11 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-ac7d48c6-528f-11ed-8438-001a4aab830c@mon.smithi186'

pass 7077587 2022-10-23 03:34:29 2022-10-23 04:47:36 2022-10-23 05:11:19 0:23:43 0:16:18 0:07:25 smithi main centos 8.stream rados/singleton/{all/pg-autoscaler mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 7077588 2022-10-23 03:34:30 2022-10-23 04:47:37 2022-10-23 05:04:52 0:17:15 0:10:18 0:06:57 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi159 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031d56cfae658907a3f24cb5740764fd798d7d2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7077589 2022-10-23 03:34:31 2022-10-23 04:47:37 2022-10-23 05:14:07 0:26:30 0:17:39 0:08:51 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
fail 7077590 2022-10-23 03:34:32 2022-10-23 04:48:07 2022-10-23 05:03:54 0:15:47 0:08:22 0:07:25 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi008 with status 5: 'sudo systemctl stop ceph-ddd88624-528f-11ed-8438-001a4aab830c@mon.smithi008'

pass 7077591 2022-10-23 03:34:34 2022-10-23 04:48:48 2022-10-23 05:13:09 0:24:21 0:15:57 0:08:24 smithi main centos 8.stream rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
fail 7077592 2022-10-23 03:34:35 2022-10-23 04:50:08 2022-10-23 05:04:28 0:14:20 0:06:21 0:07:59 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi047 with status 5: 'sudo systemctl stop ceph-f0b855b2-528f-11ed-8438-001a4aab830c@mon.a'

fail 7077593 2022-10-23 03:34:36 2022-10-23 04:51:09 2022-10-23 05:06:27 0:15:18 0:09:03 0:06:15 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi102 with status 5: 'sudo systemctl stop ceph-4f378360-5290-11ed-8438-001a4aab830c@mon.a'

pass 7077594 2022-10-23 03:34:37 2022-10-23 04:51:30 2022-10-23 07:18:32 2:27:02 2:19:45 0:07:17 smithi main centos 8.stream rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} 1
pass 7077595 2022-10-23 03:34:39 2022-10-23 04:51:30 2022-10-23 05:23:36 0:32:06 0:25:10 0:06:56 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
fail 7077596 2022-10-23 03:34:40 2022-10-23 04:51:30 2022-10-23 05:16:22 0:24:52 0:17:40 0:07:12 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi046 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 179e076c-5290-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077597 2022-10-23 03:34:41 2022-10-23 04:51:31 2022-10-23 05:10:53 0:19:22 0:13:07 0:06:15 smithi main rhel 8.4 rados/singleton/{all/pg-removal-interruption mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7077598 2022-10-23 03:34:42 2022-10-23 04:52:01 2022-10-23 06:35:49 1:43:48 1:33:55 0:09:53 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/radosbench} 2
pass 7077599 2022-10-23 03:34:43 2022-10-23 04:52:12 2022-10-23 05:09:06 0:16:54 0:08:59 0:07:55 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
pass 7077600 2022-10-23 03:34:45 2022-10-23 04:52:12 2022-10-23 05:16:12 0:24:00 0:16:23 0:07:37 smithi main rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7077601 2022-10-23 03:34:46 2022-10-23 04:52:43 2022-10-23 05:10:45 0:18:02 0:10:25 0:07:37 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi093 with status 5: 'sudo systemctl stop ceph-e0ff316c-5290-11ed-8438-001a4aab830c@mon.a'

pass 7077602 2022-10-23 03:34:47 2022-10-23 04:53:53 2022-10-23 05:13:18 0:19:25 0:12:24 0:07:01 smithi main rhel 8.4 rados/multimon/{clusters/6 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 2
pass 7077603 2022-10-23 03:34:48 2022-10-23 04:53:54 2022-10-23 05:35:27 0:41:33 0:33:55 0:07:38 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
fail 7077604 2022-10-23 03:34:50 2022-10-23 04:54:24 2022-10-23 05:12:17 0:17:53 0:10:21 0:07:32 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi103 with status 5: 'sudo systemctl stop ceph-15769476-5291-11ed-8438-001a4aab830c@mon.a'

pass 7077605 2022-10-23 03:34:51 2022-10-23 04:55:05 2022-10-23 05:26:55 0:31:50 0:21:08 0:10:42 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
fail 7077606 2022-10-23 03:34:52 2022-10-23 04:55:35 2022-10-23 05:24:53 0:29:18 0:22:19 0:06:59 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a37e8c4-5291-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7077607 2022-10-23 03:34:53 2022-10-23 04:55:36 2022-10-23 05:10:37 0:15:01 0:08:45 0:06:16 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-e2b1bf52-5290-11ed-8438-001a4aab830c@mon.smithi053'

pass 7077608 2022-10-23 03:34:54 2022-10-23 04:55:36 2022-10-23 05:37:19 0:41:43 0:32:21 0:09:22 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077609 2022-10-23 03:34:56 2022-10-23 04:55:57 2022-10-23 05:21:23 0:25:26 0:17:50 0:07:36 smithi main rhel 8.4 rados/singleton/{all/radostool mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
fail 7077610 2022-10-23 03:34:57 2022-10-23 04:56:07 2022-10-23 05:09:33 0:13:26 0:05:38 0:07:48 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-8cc6301e-5290-11ed-8438-001a4aab830c@mon.smithi074'

pass 7077611 2022-10-23 03:34:58 2022-10-23 04:56:08 2022-10-23 05:36:10 0:40:02 0:33:23 0:06:39 smithi main rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
fail 7077612 2022-10-23 03:34:59 2022-10-23 04:56:28 2022-10-23 05:25:53 0:29:25 0:20:57 0:08:28 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 243bda70-5291-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077613 2022-10-23 03:35:00 2022-10-23 04:58:59 2022-10-23 05:22:11 0:23:12 0:13:35 0:09:37 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/redirect} 2
pass 7077614 2022-10-23 03:35:02 2022-10-23 04:58:59 2022-10-23 05:16:06 0:17:07 0:08:16 0:08:51 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 7077615 2022-10-23 03:35:03 2022-10-23 05:01:30 2022-10-23 05:29:51 0:28:21 0:21:44 0:06:37 smithi main rhel 8.4 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7077616 2022-10-23 03:35:04 2022-10-23 05:02:11 2022-10-23 05:18:03 0:15:52 0:07:42 0:08:10 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi045 with status 5: 'sudo systemctl stop ceph-c12d6f4c-5291-11ed-8438-001a4aab830c@mon.a'

pass 7077617 2022-10-23 03:35:05 2022-10-23 05:03:01 2022-10-23 05:32:11 0:29:10 0:22:09 0:07:01 smithi main centos 8.stream rados/singleton/{all/random-eio mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 2
pass 7077618 2022-10-23 03:35:06 2022-10-23 05:03:02 2022-10-23 05:22:58 0:19:56 0:09:04 0:10:52 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_striper} 2
fail 7077619 2022-10-23 03:35:08 2022-10-23 05:03:12 2022-10-23 05:17:17 0:14:05 0:06:11 0:07:54 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi008 with status 5: 'sudo systemctl stop ceph-b4ed0a9e-5291-11ed-8438-001a4aab830c@mon.a'

pass 7077620 2022-10-23 03:35:09 2022-10-23 05:04:03 2022-10-23 05:25:56 0:21:53 0:10:58 0:10:55 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7077621 2022-10-23 03:35:10 2022-10-23 05:04:34 2022-10-23 05:42:04 0:37:30 0:30:16 0:07:14 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7077622 2022-10-23 03:35:11 2022-10-23 05:04:54 2022-10-23 05:21:51 0:16:57 0:05:20 0:11:37 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi102 with status 5: 'sudo systemctl stop ceph-6374cf98-5292-11ed-8438-001a4aab830c@mon.smithi102'

pass 7077623 2022-10-23 03:35:12 2022-10-23 05:06:35 2022-10-23 05:34:47 0:28:12 0:19:39 0:08:33 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
fail 7077624 2022-10-23 03:35:14 2022-10-23 05:07:46 2022-10-23 05:34:53 0:27:07 0:20:33 0:06:34 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi112 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bb43c634-5292-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077625 2022-10-23 03:35:15 2022-10-23 05:07:46 2022-10-23 05:32:05 0:24:19 0:14:00 0:10:19 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/failover} 2
pass 7077626 2022-10-23 03:35:16 2022-10-23 05:09:07 2022-10-23 05:30:29 0:21:22 0:15:21 0:06:01 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} 2
fail 7077627 2022-10-23 03:35:17 2022-10-23 05:09:37 2022-10-23 05:26:21 0:16:44 0:09:40 0:07:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi044 with status 5: 'sudo systemctl stop ceph-f5435660-5292-11ed-8438-001a4aab830c@mon.a'

pass 7077628 2022-10-23 03:35:19 2022-10-23 05:10:08 2022-10-23 05:28:24 0:18:16 0:08:39 0:09:37 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 7077629 2022-10-23 03:35:20 2022-10-23 05:10:08 2022-10-23 05:27:49 0:17:41 0:08:49 0:08:52 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077630 2022-10-23 03:35:21 2022-10-23 05:10:39 2022-10-23 05:23:27 0:12:48 0:06:44 0:06:04 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi053.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
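Here the rook task could not find the example manifest it wanted to apply; rook's master branch no longer keeps its example manifests under cluster/examples/kubernetes/ceph, so a first check would be where the checked-out tree actually puts operator.yaml. A hypothetical check on the node:

    # Verify where the checked-out rook branch keeps operator.yaml.
    ls rook/cluster/examples/kubernetes/ceph/operator.yaml
    find rook -maxdepth 4 -name operator.yaml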

pass 7077631 2022-10-23 03:35:22 2022-10-23 05:10:39 2022-10-23 05:35:40 0:25:01 0:18:03 0:06:58 smithi main centos 8.stream rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 7077632 2022-10-23 03:35:24 2022-10-23 05:10:40 2022-10-23 05:23:37 0:12:57 0:07:04 0:05:53 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi093 with status 5: 'sudo systemctl stop ceph-c44b5efe-5292-11ed-8438-001a4aab830c@mon.smithi093'

fail 7077633 2022-10-23 03:35:25 2022-10-23 05:10:50 2022-10-23 05:28:16 0:17:26 0:09:49 0:07:37 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi120 with status 5: 'sudo systemctl stop ceph-3f6c299c-5293-11ed-8438-001a4aab830c@mon.a'

fail 7077634 2022-10-23 03:35:26 2022-10-23 05:11:00 2022-10-23 05:30:06 0:19:06 0:11:37 0:07:29 smithi main rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi079 with status 5: 'sudo systemctl stop ceph-6c647b2a-5293-11ed-8438-001a4aab830c@mon.a'

pass 7077635 2022-10-23 03:35:27 2022-10-23 05:11:01 2022-10-23 05:50:03 0:39:02 0:31:34 0:07:28 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/misc} 1
pass 7077636 2022-10-23 03:35:29 2022-10-23 05:11:01 2022-10-23 05:45:39 0:34:38 0:26:00 0:08:38 smithi main rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
fail 7077637 2022-10-23 03:35:30 2022-10-23 05:12:22 2022-10-23 05:24:59 0:12:37 0:05:07 0:07:30 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-eabcca50-5292-11ed-8438-001a4aab830c@mon.a'

fail 7077638 2022-10-23 03:35:31 2022-10-23 05:13:12 2022-10-23 05:28:45 0:15:33 0:07:50 0:07:43 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi033 with status 5: 'sudo systemctl stop ceph-445ddff4-5293-11ed-8438-001a4aab830c@mon.smithi033'

pass 7077639 2022-10-23 03:35:32 2022-10-23 05:13:23 2022-10-23 05:36:31 0:23:08 0:14:17 0:08:51 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077640 2022-10-23 03:35:34 2022-10-23 05:14:33 2022-10-23 06:25:34 1:11:01 1:03:44 0:07:17 smithi main rhel 8.4 rados/singleton/{all/recovery-preemption mon_election/classic msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
pass 7077641 2022-10-23 03:35:35 2022-10-23 05:16:14 2022-10-23 06:07:48 0:51:34 0:45:04 0:06:30 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7077642 2022-10-23 03:35:36 2022-10-23 05:16:15 2022-10-23 05:39:14 0:22:59 0:16:30 0:06:29 smithi main centos 8.stream rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_recovery} 3
pass 7077643 2022-10-23 03:35:37 2022-10-23 05:16:15 2022-10-23 05:38:57 0:22:42 0:17:14 0:05:28 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect_set_object} 2
pass 7077644 2022-10-23 03:35:38 2022-10-23 05:16:26 2022-10-23 07:34:17 2:17:51 2:10:05 0:07:46 smithi main centos 8.stream rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8}} 1
fail 7077645 2022-10-23 03:35:40 2022-10-23 05:17:26 2022-10-23 05:31:16 0:13:50 0:06:26 0:07:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi045 with status 5: 'sudo systemctl stop ceph-b243189a-5293-11ed-8438-001a4aab830c@mon.a'

pass 7077646 2022-10-23 03:35:41 2022-10-23 05:18:07 2022-10-23 05:54:04 0:35:57 0:29:14 0:06:43 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
fail 7077647 2022-10-23 03:35:42 2022-10-23 05:18:57 2022-10-23 05:35:58 0:17:01 0:09:21 0:07:40 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi080 with status 5: 'sudo systemctl stop ceph-41cc5ff8-5294-11ed-8438-001a4aab830c@mon.a'

fail 7077648 2022-10-23 03:35:43 2022-10-23 05:18:58 2022-10-23 05:35:06 0:16:08 0:08:28 0:07:40 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi003 with status 5: 'sudo systemctl stop ceph-3be441dc-5294-11ed-8438-001a4aab830c@mon.smithi003'

fail 7077649 2022-10-23 03:35:44 2022-10-23 05:18:58 2022-10-23 05:53:06 0:34:08 0:23:17 0:10:51 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 38c25840-5294-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 2\'"\'"\'\''
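
Unescaped, the assertion that failed here is roughly the following pipeline, run inside the cephadm shell:

    ceph versions | jq -e '.mgr | length == 2'

The staggered-upgrade step appears to expect the mgr daemons to be reporting exactly two distinct version strings at this point; a non-zero exit from jq -e means that condition did not hold.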

pass 7077650 2022-10-23 03:35:46 2022-10-23 05:19:29 2022-10-23 05:39:09 0:19:40 0:12:44 0:06:56 smithi main rhel 8.4 rados/singleton/{all/resolve_stuck_peering mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 2
pass 7077651 2022-10-23 03:35:47 2022-10-23 05:20:19 2022-10-23 06:00:37 0:40:18 0:32:01 0:08:17 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} 2
fail 7077652 2022-10-23 03:35:48 2022-10-23 05:21:30 2022-10-23 05:50:54 0:29:24 0:21:28 0:07:56 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ad719818-5294-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
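
For readability, the shell-escaped check above (which also appears in the other upgrade-suite failures in this run) resolves to roughly this pipeline, where $sha1 is the value passed in via -e sha1=...:

    ceph versions | jq -e '.overall | keys' | grep $sha1

It asserts that the expected build sha1 appears among the version strings the cluster reports after the upgrade; exit status 1 most likely means no matching version string was found, i.e. the cluster never reached the target build.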

pass 7077653 2022-10-23 03:35:49 2022-10-23 05:22:01 2022-10-23 06:05:24 0:43:23 0:36:49 0:06:34 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/one workloads/snaps-few-objects} 2
pass 7077654 2022-10-23 03:35:51 2022-10-23 05:22:21 2022-10-23 05:44:32 0:22:11 0:12:27 0:09:44 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077655 2022-10-23 03:35:52 2022-10-23 05:22:51 2022-10-23 05:40:33 0:17:42 0:05:22 0:12:20 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi153 with status 5: 'sudo systemctl stop ceph-e4c44f4a-5294-11ed-8438-001a4aab830c@mon.a'

pass 7077656 2022-10-23 03:35:53 2022-10-23 05:22:52 2022-10-23 05:50:49 0:27:57 0:19:59 0:07:58 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/set-chunks-read} 2
pass 7077657 2022-10-23 03:35:54 2022-10-23 05:23:02 2022-10-23 05:40:17 0:17:15 0:08:22 0:08:53 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
fail 7077658 2022-10-23 03:35:56 2022-10-23 05:23:33 2022-10-23 05:42:24 0:18:51 0:10:33 0:08:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-4f1eb290-5295-11ed-8438-001a4aab830c@mon.a'

fail 7077659 2022-10-23 03:35:57 2022-10-23 05:25:04 2022-10-23 05:41:21 0:16:17 0:07:52 0:08:25 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi066 with status 5: 'sudo systemctl stop ceph-07c5c9e2-5295-11ed-8438-001a4aab830c@mon.smithi066'

pass 7077660 2022-10-23 03:35:58 2022-10-23 05:25:04 2022-10-23 05:44:47 0:19:43 0:12:13 0:07:30 smithi main centos 8.stream rados/singleton/{all/test-crash mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
fail 7077661 2022-10-23 03:35:59 2022-10-23 05:25:55 2022-10-23 05:39:31 0:13:36 0:06:49 0:06:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi047 with status 5: 'sudo systemctl stop ceph-e6912096-5294-11ed-8438-001a4aab830c@mon.a'

pass 7077662 2022-10-23 03:36:00 2022-10-23 05:26:05 2022-10-23 06:04:15 0:38:10 0:28:41 0:09:29 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7077663 2022-10-23 03:36:02 2022-10-23 08:53:29 2022-10-23 09:29:07 0:35:38 0:27:14 0:08:24 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077664 2022-10-23 03:36:03 2022-10-23 08:55:32 2022-10-23 09:35:55 0:40:23 0:33:02 0:07:21 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/mon_recovery validater/valgrind} 2
fail 7077665 2022-10-23 03:36:04 2022-10-23 08:55:42 2022-10-23 09:12:52 0:17:10 0:10:01 0:07:09 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi083 with status 5: 'sudo systemctl stop ceph-a5aed6b8-52b2-11ed-8438-001a4aab830c@mon.a'

fail 7077666 2022-10-23 03:36:05 2022-10-23 08:55:43 2022-10-23 09:11:36 0:15:53 0:05:19 0:10:34 smithi main ubuntu 20.04 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Command failed on smithi046 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7077667 2022-10-23 03:36:07 2022-10-23 08:56:13 2022-10-23 09:25:01 0:28:48 0:23:16 0:05:32 smithi main rhel 8.4 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7077668 2022-10-23 03:36:08 2022-10-23 08:56:14 2022-10-23 12:04:29 3:08:15 3:00:54 0:07:21 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
fail 7077669 2022-10-23 03:36:09 2022-10-23 08:56:25 2022-10-23 09:13:42 0:17:17 0:11:32 0:05:45 smithi main rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi063 with status 5: 'sudo systemctl stop ceph-ee49065a-52b2-11ed-8438-001a4aab830c@mon.a'

pass 7077670 2022-10-23 03:36:10 2022-10-23 08:56:35 2022-10-23 09:22:58 0:26:23 0:17:58 0:08:25 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{centos_8} tasks/insights} 2
fail 7077671 2022-10-23 03:36:12 2022-10-23 08:56:56 2022-10-23 09:08:50 0:11:54 0:05:38 0:06:16 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-435530d4-52b2-11ed-8438-001a4aab830c@mon.smithi053'

pass 7077672 2022-10-23 03:36:13 2022-10-23 08:57:26 2022-10-23 09:33:20 0:35:54 0:23:33 0:12:21 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-balanced} 2
fail 7077673 2022-10-23 03:36:14 2022-10-23 08:59:47 2022-10-23 09:12:18 0:12:31 0:05:09 0:07:22 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi040 with status 5: 'sudo systemctl stop ceph-ad3088f0-52b2-11ed-8438-001a4aab830c@mon.smithi040'

pass 7077674 2022-10-23 03:36:15 2022-10-23 09:00:37 2022-10-23 09:19:45 0:19:08 0:08:38 0:10:30 smithi main ubuntu 20.04 rados/singleton/{all/test-noautoscale-flag mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077675 2022-10-23 03:36:17 2022-10-23 09:02:08 2022-10-23 09:19:33 0:17:25 0:10:07 0:07:18 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-98fb4b12-52b3-11ed-8438-001a4aab830c@mon.a'

fail 7077676 2022-10-23 03:36:18 2022-10-23 09:02:08 2022-10-23 09:31:44 0:29:36 0:21:47 0:07:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 966a206c-52b3-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077677 2022-10-23 03:36:19 2022-10-23 09:02:39 2022-10-23 09:21:45 0:19:06 0:13:32 0:05:34 smithi main rhel 8.4 rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 7077678 2022-10-23 03:36:20 2022-10-23 09:02:40 2022-10-23 09:31:58 0:29:18 0:21:30 0:07:48 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi142 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 327aa004-52b3-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077679 2022-10-23 03:36:22 2022-10-23 09:02:50 2022-10-23 09:27:38 0:24:48 0:12:53 0:11:55 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077680 2022-10-23 03:36:23 2022-10-23 09:04:11 2022-10-23 09:27:46 0:23:35 0:14:14 0:09:21 smithi main ubuntu 20.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} 2
pass 7077681 2022-10-23 03:36:24 2022-10-23 09:04:12 2022-10-23 09:30:32 0:26:20 0:16:08 0:10:12 smithi main ubuntu 20.04 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077682 2022-10-23 03:36:25 2022-10-23 09:04:12 2022-10-23 09:24:20 0:20:08 0:11:31 0:08:37 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
Failure Reason:

"2022-10-23T09:21:35.549846+0000 mon.a (mon.0) 21 : cluster [WRN] Health check failed: 2/21 mons down, quorum a,b,c,d,e,f,g,h,j,k,l,m,n,p,q,r,s,t,u (MON_DOWN)" in cluster log

pass 7077683 2022-10-23 03:36:26 2022-10-23 09:05:23 2022-10-23 09:34:24 0:29:01 0:23:11 0:05:50 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
pass 7077684 2022-10-23 03:36:28 2022-10-23 09:37:03 1468 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
pass 7077685 2022-10-23 03:36:29 2022-10-23 09:05:34 2022-10-23 09:42:53 0:37:19 0:29:48 0:07:31 smithi main rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 7077686 2022-10-23 03:36:30 2022-10-23 09:05:35 2022-10-23 09:38:16 0:32:41 0:26:10 0:06:31 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/small-objects-localized} 2
pass 7077687 2022-10-23 03:36:31 2022-10-23 09:05:36 2022-10-23 09:26:56 0:21:20 0:08:47 0:12:33 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
fail 7077688 2022-10-23 03:36:32 2022-10-23 09:07:27 2022-10-23 09:21:32 0:14:05 0:07:05 0:07:00 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi081 with status 5: 'sudo systemctl stop ceph-011ca7a4-52b4-11ed-8438-001a4aab830c@mon.a'

pass 7077689 2022-10-23 03:36:34 2022-10-23 09:08:27 2022-10-23 09:25:51 0:17:24 0:07:29 0:09:55 smithi main ubuntu 20.04 rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} 1
fail 7077690 2022-10-23 03:36:35 2022-10-23 09:08:28 2022-10-23 09:21:49 0:13:21 0:06:47 0:06:34 smithi main centos 8.stream rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi116 with status 5: 'sudo systemctl stop ceph-01e75256-52b4-11ed-8438-001a4aab830c@mon.smithi116'

pass 7077691 2022-10-23 03:36:36 2022-10-23 09:08:58 2022-10-23 09:31:06 0:22:08 0:17:22 0:04:46 smithi main rhel 8.4 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7077692 2022-10-23 03:36:37 2022-10-23 09:08:59 2022-10-23 09:26:44 0:17:45 0:11:21 0:06:24 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7077693 2022-10-23 03:36:38 2022-10-23 09:09:29 2022-10-23 10:36:38 1:27:09 1:16:32 0:10:37 smithi main ubuntu 20.04 rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 2
fail 7077694 2022-10-23 03:36:40 2022-10-23 09:11:40 2022-10-23 09:25:51 0:14:11 0:06:23 0:07:48 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi040 with status 5: 'sudo systemctl stop ceph-76487148-52b4-11ed-8438-001a4aab830c@mon.a'

pass 7077695 2022-10-23 03:36:41 2022-10-23 09:12:21 2022-10-23 09:38:29 0:26:08 0:15:04 0:11:04 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
pass 7077696 2022-10-23 03:36:42 2022-10-23 09:13:01 2022-10-23 10:04:42 0:51:41 0:44:17 0:07:24 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} 1
pass 7077697 2022-10-23 03:36:43 2022-10-23 09:13:52 2022-10-23 09:50:16 0:36:24 0:25:58 0:10:26 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 7077698 2022-10-23 03:36:44 2022-10-23 09:14:02 2022-10-23 09:30:14 0:16:12 0:05:01 0:11:11 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi063 with status 5: 'sudo systemctl stop ceph-1890c734-52b5-11ed-8438-001a4aab830c@mon.smithi063'

fail 7077699 2022-10-23 03:36:46 2022-10-23 09:14:03 2022-10-23 09:32:01 0:17:58 0:09:53 0:08:05 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi155 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031d56cfae658907a3f24cb5740764fd798d7d2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7077700 2022-10-23 03:36:47 2022-10-23 09:14:44 2022-10-23 09:45:07 0:30:23 0:22:55 0:07:28 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects} 2
fail 7077701 2022-10-23 03:36:48 2022-10-23 09:16:15 2022-10-23 09:31:30 0:15:15 0:08:07 0:07:08 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi050 with status 5: 'sudo systemctl stop ceph-3581c3de-52b5-11ed-8438-001a4aab830c@mon.a'

pass 7077702 2022-10-23 03:36:49 2022-10-23 09:16:35 2022-10-23 09:33:52 0:17:17 0:11:02 0:06:15 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 7077703 2022-10-23 03:36:51 2022-10-23 09:16:36 2022-10-23 09:47:30 0:30:54 0:19:48 0:11:06 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6c407b04-52b5-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077704 2022-10-23 03:36:52 2022-10-23 09:18:16 2022-10-23 09:57:18 0:39:02 0:27:47 0:11:15 smithi main ubuntu 20.04 rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 2
pass 7077705 2022-10-23 03:36:53 2022-10-23 09:19:37 2022-10-23 09:39:44 0:20:07 0:10:33 0:09:34 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7077706 2022-10-23 03:36:54 2022-10-23 09:20:08 2022-10-23 09:55:02 0:34:54 0:24:30 0:10:24 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077707 2022-10-23 03:36:55 2022-10-23 09:21:38 2022-10-23 09:42:23 0:20:45 0:14:30 0:06:15 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/mon_recovery validater/lockdep} 2
fail 7077708 2022-10-23 03:36:56 2022-10-23 09:21:39 2022-10-23 09:34:36 0:12:57 0:06:52 0:06:05 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi033 with status 5: 'sudo systemctl stop ceph-cd2dc638-52b5-11ed-8438-001a4aab830c@mon.smithi033'

pass 7077709 2022-10-23 03:36:58 2022-10-23 09:21:39 2022-10-23 10:03:28 0:41:49 0:31:33 0:10:16 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077710 2022-10-23 03:36:59 2022-10-23 09:21:49 2022-10-23 09:40:57 0:19:08 0:10:07 0:09:01 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi005 with status 5: 'sudo systemctl stop ceph-967efd2c-52b6-11ed-8438-001a4aab830c@mon.a'

pass 7077711 2022-10-23 03:37:00 2022-10-23 09:23:00 2022-10-23 10:08:09 0:45:09 0:34:06 0:11:03 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
fail 7077712 2022-10-23 03:37:01 2022-10-23 09:24:31 2022-10-23 09:38:49 0:14:18 0:07:06 0:07:12 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-607fefd8-52b6-11ed-8438-001a4aab830c@mon.a'

fail 7077713 2022-10-23 03:37:03 2022-10-23 09:24:31 2022-10-23 09:53:15 0:28:44 0:21:42 0:07:02 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92cc745c-52b6-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077714 2022-10-23 03:37:04 2022-10-23 09:24:31 2022-10-23 09:41:29 0:16:58 0:11:07 0:05:51 smithi main rhel 8.4 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7077715 2022-10-23 03:37:05 2022-10-23 09:24:32 2022-10-23 09:58:17 0:33:45 0:24:01 0:09:44 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7077716 2022-10-23 03:37:06 2022-10-23 09:25:02 2022-10-23 10:01:07 0:36:05 0:28:21 0:07:44 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7077717 2022-10-23 03:37:07 2022-10-23 09:25:53 2022-10-23 09:56:35 0:30:42 0:24:03 0:06:39 smithi main rhel 8.4 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 2
pass 7077718 2022-10-23 03:37:09 2022-10-23 09:25:53 2022-10-23 10:00:12 0:34:19 0:26:45 0:07:34 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread} 2
fail 7077719 2022-10-23 03:37:10 2022-10-23 09:27:05 2022-10-23 09:42:33 0:15:28 0:05:18 0:10:10 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi094 with status 5: 'sudo systemctl stop ceph-d24b8a28-52b6-11ed-8438-001a4aab830c@mon.smithi094'

fail 7077720 2022-10-23 03:37:11 2022-10-23 09:27:05 2022-10-23 09:41:12 0:14:07 0:06:44 0:07:23 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi170 with status 5: 'sudo systemctl stop ceph-a8c9c570-52b6-11ed-8438-001a4aab830c@mon.a'

pass 7077721 2022-10-23 03:37:12 2022-10-23 09:27:46 2022-10-23 09:51:24 0:23:38 0:12:43 0:10:55 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077722 2022-10-23 03:37:14 2022-10-23 09:27:47 2022-10-23 09:43:53 0:16:06 0:08:03 0:08:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
fail 7077723 2022-10-23 03:37:15 2022-10-23 09:28:47 2022-10-23 09:43:53 0:15:06 0:08:14 0:06:52 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-f5100156-52b6-11ed-8438-001a4aab830c@mon.smithi121'

pass 7077724 2022-10-23 03:37:16 2022-10-23 09:28:48 2022-10-23 09:47:02 0:18:14 0:07:54 0:10:20 smithi main ubuntu 20.04 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
pass 7077725 2022-10-23 03:37:17 2022-10-23 09:29:08 2022-10-23 10:01:59 0:32:51 0:22:08 0:10:43 smithi main ubuntu 20.04 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 2
fail 7077726 2022-10-23 03:37:19 2022-10-23 09:30:19 2022-10-23 09:46:51 0:16:32 0:08:10 0:08:22 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-5dfd84ea-52b7-11ed-8438-001a4aab830c@mon.a'

pass 7077727 2022-10-23 03:37:20 2022-10-23 09:31:09 2022-10-23 09:50:38 0:19:29 0:12:19 0:07:10 smithi main rhel 8.4 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7077728 2022-10-23 03:37:21 2022-10-23 09:31:40 2022-10-23 12:13:11 2:41:31 2:31:22 0:10:09 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-radosbench} 2
fail 7077729 2022-10-23 03:37:22 2022-10-23 09:31:50 2022-10-23 09:59:05 0:27:15 0:21:01 0:06:14 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi142 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid abaec096-52b7-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077730 2022-10-23 03:37:24 2022-10-23 09:32:01 2022-10-23 10:15:43 0:43:42 0:33:23 0:10:19 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-localized} 2
fail 7077731 2022-10-23 03:37:25 2022-10-23 09:32:11 2022-10-23 09:50:38 0:18:27 0:09:40 0:08:47 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi167 with status 5: 'sudo systemctl stop ceph-e00d2b84-52b7-11ed-8438-001a4aab830c@mon.a'

fail 7077732 2022-10-23 03:37:26 2022-10-23 09:33:22 2022-10-23 09:48:08 0:14:46 0:07:31 0:07:15 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi032 with status 5: 'sudo systemctl stop ceph-bb6a3736-52b7-11ed-8438-001a4aab830c@mon.a'

pass 7077733 2022-10-23 03:37:27 2022-10-23 09:34:33 2022-10-23 09:57:26 0:22:53 0:16:07 0:06:46 smithi main centos 8.stream rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8}} 1
pass 7077734 2022-10-23 03:37:29 2022-10-23 09:34:33 2022-10-23 10:05:36 0:31:03 0:25:13 0:05:50 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
pass 7077735 2022-10-23 03:37:30 2022-10-23 09:34:43 2022-10-23 09:57:04 0:22:21 0:11:25 0:10:56 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
fail 7077736 2022-10-23 03:37:31 2022-10-23 09:35:04 2022-10-23 09:50:11 0:15:07 0:08:04 0:07:03 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi079 with status 5: 'sudo systemctl stop ceph-d2a36db4-52b7-11ed-8438-001a4aab830c@mon.smithi079'

pass 7077737 2022-10-23 03:37:32 2022-10-23 09:35:04 2022-10-23 09:52:22 0:17:18 0:08:18 0:09:00 smithi main ubuntu 20.04 rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077738 2022-10-23 03:37:34 2022-10-23 09:35:05 2022-10-23 09:53:06 0:18:01 0:10:17 0:07:44 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi078 with status 5: 'sudo systemctl stop ceph-4ec902dc-52b8-11ed-8438-001a4aab830c@mon.a'

pass 7077739 2022-10-23 03:37:35 2022-10-23 09:35:25 2022-10-23 09:54:02 0:18:37 0:12:31 0:06:06 smithi main rhel 8.4 rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7077740 2022-10-23 03:37:36 2022-10-23 09:35:26 2022-10-23 09:56:35 0:21:09 0:07:41 0:13:28 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi066 with status 5: 'sudo systemctl stop ceph-a6e2820e-52b8-11ed-8438-001a4aab830c@mon.a'

pass 7077741 2022-10-23 03:37:37 2022-10-23 09:35:56 2022-10-23 09:54:51 0:18:55 0:10:14 0:08:41 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
fail 7077742 2022-10-23 03:37:38 2022-10-23 09:35:57 2022-10-23 09:52:04 0:16:07 0:07:19 0:08:48 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi080 with status 5: 'sudo systemctl stop ceph-0489c71a-52b8-11ed-8438-001a4aab830c@mon.smithi080'

pass 7077743 2022-10-23 03:37:40 2022-10-23 09:36:57 2022-10-23 10:14:29 0:37:32 0:30:04 0:07:28 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/snaps-few-objects} 2
fail 7077744 2022-10-23 03:37:41 2022-10-23 09:37:08 2022-10-23 09:48:09 0:11:01 0:05:33 0:05:28 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi017 with status 5: 'sudo systemctl stop ceph-bd240610-52b7-11ed-8438-001a4aab830c@mon.a'

pass 7077745 2022-10-23 03:37:42 2022-10-23 09:37:08 2022-10-23 10:26:16 0:49:08 0:36:44 0:12:24 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7077746 2022-10-23 03:37:43 2022-10-23 09:38:39 2022-10-23 10:21:12 0:42:33 0:31:10 0:11:23 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077747 2022-10-23 03:37:44 2022-10-23 09:38:59 2022-10-23 10:41:11 1:02:12 0:54:41 0:07:31 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_api_tests validater/valgrind} 2
pass 7077748 2022-10-23 03:37:46 2022-10-23 09:39:51 2022-10-23 09:59:19 0:19:28 0:08:51 0:10:37 smithi main ubuntu 20.04 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077749 2022-10-23 03:37:47 2022-10-23 09:39:52 2022-10-23 10:02:55 0:23:03 0:15:35 0:07:28 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/readwrite} 2
fail 7077750 2022-10-23 03:37:48 2022-10-23 09:41:03 2022-10-23 09:54:37 0:13:34 0:05:47 0:07:47 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-64b06bf8-52b8-11ed-8438-001a4aab830c@mon.smithi187'

pass 7077751 2022-10-23 03:37:49 2022-10-23 09:41:23 2022-10-23 10:14:16 0:32:53 0:26:19 0:06:34 smithi main rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/progress} 2
fail 7077752 2022-10-23 03:37:51 2022-10-23 09:41:24 2022-10-23 10:07:17 0:25:53 0:17:54 0:07:59 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c6ed0c7c-52b8-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077753 2022-10-23 03:37:52 2022-10-23 09:42:25 2022-10-23 10:01:32 0:19:07 0:12:34 0:06:33 smithi main centos 8.stream rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 7077754 2022-10-23 03:37:53 2022-10-23 09:42:25 2022-10-23 09:59:34 0:17:09 0:10:10 0:06:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi191 with status 5: 'sudo systemctl stop ceph-3272ea70-52b9-11ed-8438-001a4aab830c@mon.a'

pass 7077755 2022-10-23 03:37:54 2022-10-23 09:42:35 2022-10-23 10:20:45 0:38:10 0:28:34 0:09:36 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 7077756 2022-10-23 03:37:55 2022-10-23 09:42:56 2022-10-23 13:03:55 3:20:59 3:15:22 0:05:37 smithi main rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
fail 7077757 2022-10-23 03:37:57 2022-10-23 09:42:56 2022-10-23 10:08:50 0:25:54 0:17:30 0:08:24 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi121 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59b8d00-52b8-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077758 2022-10-23 03:37:58 2022-10-23 09:43:57 2022-10-23 10:23:32 0:39:35 0:30:47 0:08:48 smithi main ubuntu 20.04 rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077759 2022-10-23 03:37:59 2022-10-23 09:43:58 2022-10-23 10:00:58 0:17:00 0:06:36 0:10:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-6636162a-52b9-11ed-8438-001a4aab830c@mon.a'

pass 7077760 2022-10-23 03:38:00 2022-10-23 09:46:59 2022-10-23 10:12:39 0:25:40 0:18:12 0:07:28 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 2
fail 7077761 2022-10-23 03:38:02 2022-10-23 09:47:10 2022-10-23 09:59:47 0:12:37 0:06:46 0:05:51 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi036 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7077762 2022-10-23 03:38:03 2022-10-23 09:47:10 2022-10-23 10:16:11 0:29:01 0:22:17 0:06:44 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0fc9d6c-52b9-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077763 2022-10-23 03:38:04 2022-10-23 09:47:41 2022-10-23 10:09:18 0:21:37 0:14:52 0:06:45 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7077764 2022-10-23 03:38:05 2022-10-23 09:48:12 2022-10-23 10:05:33 0:17:21 0:05:33 0:11:48 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi079 with status 5: 'sudo systemctl stop ceph-0efd01f6-52ba-11ed-8438-001a4aab830c@mon.smithi079'

pass 7077765 2022-10-23 03:38:06 2022-10-23 09:50:13 2022-10-23 10:08:35 0:18:22 0:12:54 0:05:28 smithi main rhel 8.4 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7077766 2022-10-23 03:38:08 2022-10-23 09:50:13 2022-10-23 10:06:41 0:16:28 0:05:41 0:10:47 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-48417668-52ba-11ed-8438-001a4aab830c@mon.a'

pass 7077767 2022-10-23 03:38:09 2022-10-23 09:50:24 2022-10-23 10:13:52 0:23:28 0:16:56 0:06:32 smithi main rhel 8.4 rados/multimon/{clusters/6 mon_election/classic msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 2
pass 7077768 2022-10-23 03:38:10 2022-10-23 09:50:44 2022-10-23 10:11:00 0:20:16 0:09:02 0:11:14 smithi main ubuntu 20.04 rados/singleton/{all/deduptool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077769 2022-10-23 03:38:11 2022-10-23 10:04:45 2022-10-23 10:24:29 0:19:44 0:10:00 0:09:44 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi079 with status 5: 'sudo systemctl stop ceph-a5469396-52bc-11ed-8438-001a4aab830c@mon.a'

pass 7077770 2022-10-23 03:38:13 2022-10-23 10:06:46 2022-10-23 10:36:21 0:29:35 0:23:18 0:06:17 smithi main rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-balanced} 2
fail 7077771 2022-10-23 03:38:14 2022-10-23 10:07:27 2022-10-23 10:21:37 0:14:10 0:06:33 0:07:37 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi186 with status 5: 'sudo systemctl stop ceph-536dd610-52bc-11ed-8438-001a4aab830c@mon.smithi186'

pass 7077772 2022-10-23 03:38:15 2022-10-23 10:08:17 2022-10-23 10:25:35 0:17:18 0:08:10 0:09:08 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
pass 7077773 2022-10-23 03:38:16 2022-10-23 10:08:38 2022-10-23 10:42:11 0:33:33 0:23:09 0:10:24 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} 2
fail 7077774 2022-10-23 03:38:17 2022-10-23 10:08:58 2022-10-23 10:22:44 0:13:46 0:06:38 0:07:08 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

Command failed on smithi061 with status 5: 'sudo systemctl stop ceph-7e6d136c-52bc-11ed-8438-001a4aab830c@mon.smithi061'

fail 7077775 2022-10-23 03:38:19 2022-10-23 10:09:29 2022-10-23 10:26:05 0:16:36 0:09:46 0:06:50 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi017 with status 5: 'sudo systemctl stop ceph-d9a9cf86-52bc-11ed-8438-001a4aab830c@mon.a'

pass 7077776 2022-10-23 03:38:20 2022-10-23 10:09:29 2022-10-23 10:30:20 0:20:51 0:11:31 0:09:20 smithi main centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7077777 2022-10-23 03:38:21 2022-10-23 10:11:10 2022-10-23 10:48:06 0:36:56 0:28:39 0:08:17 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_api_tests} 2
fail 7077778 2022-10-23 03:38:22 2022-10-23 10:12:40 2022-10-23 10:30:14 0:17:34 0:09:24 0:08:10 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-5e98687e-52bd-11ed-8438-001a4aab830c@mon.a'

pass 7077779 2022-10-23 03:38:24 2022-10-23 10:13:31 2022-10-23 10:33:16 0:19:45 0:12:30 0:07:15 smithi main centos 8.stream rados/singleton/{all/divergent_priors mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 7077780 2022-10-23 03:38:25 2022-10-23 10:14:01 2022-10-23 10:39:21 0:25:20 0:18:42 0:06:38 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
fail 7077781 2022-10-23 03:38:26 2022-10-23 10:14:22 2022-10-23 10:28:19 0:13:57 0:06:06 0:07:51 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-257c169e-52bd-11ed-8438-001a4aab830c@mon.a'

pass 7077782 2022-10-23 03:38:27 2022-10-23 10:14:32 2022-10-23 10:49:09 0:34:37 0:27:04 0:07:33 smithi main rhel 8.4 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} 1
fail 7077783 2022-10-23 03:38:29 2022-10-23 10:15:53 2022-10-23 10:42:43 0:26:50 0:20:36 0:06:14 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9e8adb0-52bd-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
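
Note: the upgrade-style jobs (upgrade, mds_upgrade_sequence, mgr-nfs-upgrade) all end with the same convergence check, run inside a cephadm shell of the starting image: the keys of '.overall' in 'ceph versions' must contain the sha1 of the build under test. A sketch of what the failed command amounts to, with the sha1 taken from the failure message:

    sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd
    ceph versions | jq -e '.overall | keys' | grep "$sha1"
    # The version strings listed under .overall embed the build sha1, so grep exits 0
    # as soon as the target sha1 appears among them; the 'status 1' above is grep
    # finding no match, i.e. no daemon ever reported the new build.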

fail 7077784 2022-10-23 03:38:30 2022-10-23 10:16:13 2022-10-23 10:29:55 0:13:42 0:06:40 0:07:02 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi050 with status 5: 'sudo systemctl stop ceph-80192524-52bd-11ed-8438-001a4aab830c@mon.a'

pass 7077785 2022-10-23 03:38:31 2022-10-23 10:17:04 2022-10-23 10:43:11 0:26:07 0:17:54 0:08:13 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/lockdep} 2
pass 7077786 2022-10-23 03:38:32 2022-10-23 10:18:14 2022-10-23 10:38:46 0:20:32 0:13:11 0:07:21 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7077787 2022-10-23 03:38:33 2022-10-23 10:19:45 2022-10-23 10:57:56 0:38:11 0:27:54 0:10:17 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077788 2022-10-23 03:38:35 2022-10-23 10:19:45 2022-10-23 11:14:04 0:54:19 0:47:35 0:06:44 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-agent-big} 2
pass 7077789 2022-10-23 03:38:36 2022-10-23 10:20:46 2022-10-23 10:44:02 0:23:16 0:16:40 0:06:36 smithi main rhel 8.4 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7077790 2022-10-23 03:38:37 2022-10-23 10:21:16 2022-10-23 10:33:39 0:12:23 0:04:56 0:07:27 smithi main ubuntu 18.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-045dd776-52be-11ed-8438-001a4aab830c@mon.smithi049'

pass 7077791 2022-10-23 03:38:38 2022-10-23 10:21:17 2022-10-23 10:40:09 0:18:52 0:13:26 0:05:26 smithi main rhel 8.4 rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7077792 2022-10-23 03:38:40 2022-10-23 10:21:27 2022-10-23 10:40:41 0:19:14 0:11:34 0:07:40 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
pass 7077793 2022-10-23 03:38:41 2022-10-23 10:21:47 2022-10-23 10:43:45 0:21:58 0:12:15 0:09:43 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic objectstore/filestore-xfs supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
fail 7077794 2022-10-23 03:38:42 2022-10-23 10:22:18 2022-10-23 10:37:34 0:15:16 0:08:18 0:06:58 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi084 with status 5: 'sudo systemctl stop ceph-76d7d78e-52be-11ed-8438-001a4aab830c@mon.smithi084'

fail 7077795 2022-10-23 03:38:43 2022-10-23 10:22:28 2022-10-23 10:36:06 0:13:38 0:06:38 0:07:00 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-4fe59ee0-52be-11ed-8438-001a4aab830c@mon.a'

fail 7077796 2022-10-23 03:38:44 2022-10-23 10:22:49 2022-10-23 10:40:19 0:17:30 0:10:50 0:06:40 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi012 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031d56cfae658907a3f24cb5740764fd798d7d2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
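
Note: status 125 from the test_cephadm.sh workunit is ambiguous on its own. GNU timeout (which wraps the script here) reserves 124 for "the command timed out" and 125 for a failure of timeout itself, but it also propagates the wrapped command's exit code, and podman/docker use 125 for errors in the container engine rather than in the container. Given the 3h limit was nowhere near reached (the job ran about 11 minutes), the 125 most likely bubbled up from the container engine inside the script. A small sketch for telling the cases apart when reproducing by hand:

    timeout 3h ./qa/workunits/cephadm/test_cephadm.sh; rc=$?
    case $rc in
      124) echo "hit the 3h timeout" ;;
      125) echo "timeout itself failed, or the script exited 125 (e.g. a podman engine error)" ;;
      *)   echo "script exited with $rc" ;;
    esac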

pass 7077797 2022-10-23 03:38:46 2022-10-23 10:22:49 2022-10-23 10:44:24 0:21:35 0:14:03 0:07:32 smithi main rhel 8.4 rados/singleton/{all/dump-stuck mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
pass 7077798 2022-10-23 03:38:47 2022-10-23 10:23:00 2022-10-23 10:50:46 0:27:46 0:17:56 0:09:50 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077799 2022-10-23 03:38:48 2022-10-23 10:23:00 2022-10-23 10:39:17 0:16:17 0:07:59 0:08:18 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi142 with status 5: 'sudo systemctl stop ceph-aa174684-52be-11ed-8438-001a4aab830c@mon.smithi142'

pass 7077800 2022-10-23 03:38:49 2022-10-23 10:24:10 2022-10-23 10:43:40 0:19:30 0:10:01 0:09:29 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
pass 7077801 2022-10-23 03:38:51 2022-10-23 10:24:11 2022-10-23 10:46:13 0:22:02 0:14:39 0:07:23 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077802 2022-10-23 03:38:52 2022-10-23 10:24:31 2022-10-23 10:48:36 0:24:05 0:16:54 0:07:11 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/cache-agent-small} 2
fail 7077803 2022-10-23 03:38:53 2022-10-23 10:25:42 2022-10-23 10:40:46 0:15:04 0:09:04 0:06:00 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi017 with status 5: 'sudo systemctl stop ceph-050b71aa-52bf-11ed-8438-001a4aab830c@mon.a'

fail 7077804 2022-10-23 03:38:54 2022-10-23 10:26:12 2022-10-23 11:00:00 0:33:48 0:23:36 0:10:12 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1cde0126-52bf-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 2\'"\'"\'\''
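
Note: the staggered upgrade job asserts something slightly different from the other upgrade suites: 'ceph versions | jq -e ".mgr | length == 2"'. jq -e sets its exit status from the last value it outputs (0 only when that value is neither false nor null), so status 1 here means the mgr section of 'ceph versions' did not contain the expected two distinct version entries, i.e. the mgr-first stage of the staggered upgrade did not take effect. A sketch of the same check:

    ceph versions | jq -e '.mgr | length == 2'
    # exits 0 only while the mgrs report exactly two distinct versions (old build
    # plus new build), which is presumably the intermediate state the staggered
    # upgrade test expects to observe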

pass 7077805 2022-10-23 03:38:55 2022-10-23 10:26:23 2022-10-23 10:43:15 0:16:52 0:11:19 0:05:33 smithi main centos 8.stream rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
fail 7077806 2022-10-23 03:38:57 2022-10-23 10:26:23 2022-10-23 10:41:44 0:15:21 0:08:33 0:06:48 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi179 with status 5: 'sudo systemctl stop ceph-155ad942-52bf-11ed-8438-001a4aab830c@mon.smithi179'

pass 7077807 2022-10-23 03:38:58 2022-10-23 10:26:33 2022-10-23 11:07:58 0:41:25 0:31:29 0:09:56 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077808 2022-10-23 03:38:59 2022-10-23 10:26:44 2022-10-23 10:58:00 0:31:16 0:25:27 0:05:49 smithi main rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read} 2
fail 7077809 2022-10-23 03:39:00 2022-10-23 10:26:54 2022-10-23 10:44:54 0:18:00 0:10:16 0:07:44 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi005 with status 5: 'sudo systemctl stop ceph-8c8ca7fc-52bf-11ed-8438-001a4aab830c@mon.a'

pass 7077810 2022-10-23 03:39:02 2022-10-23 10:27:05 2022-10-23 11:28:02 1:00:57 0:54:59 0:05:58 smithi main rhel 8.4 rados/singleton/{all/ec-inconsistent-hinfo mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
pass 7077811 2022-10-23 03:39:03 2022-10-23 10:27:05 2022-10-23 10:49:51 0:22:46 0:14:10 0:08:36 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/scrub_test} 2
pass 7077812 2022-10-23 03:39:04 2022-10-23 10:28:26 2022-10-23 10:59:17 0:30:51 0:20:34 0:10:17 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 2
fail 7077813 2022-10-23 03:39:05 2022-10-23 10:30:06 2022-10-23 10:47:09 0:17:03 0:11:22 0:05:41 smithi main rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-f675222a-52bf-11ed-8438-001a4aab830c@mon.a'

pass 7077814 2022-10-23 03:39:07 2022-10-23 10:30:17 2022-10-23 11:03:43 0:33:26 0:27:19 0:06:07 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7077815 2022-10-23 03:39:08 2022-10-23 10:30:27 2022-10-23 12:27:03 1:56:36 1:43:10 0:13:26 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
fail 7077816 2022-10-23 03:39:09 2022-10-23 10:33:18 2022-10-23 11:03:17 0:29:59 0:21:16 0:08:43 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi061 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 95b4fdba-52c0-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077817 2022-10-23 03:39:10 2022-10-23 10:36:09 2022-10-23 11:09:50 0:33:41 0:26:17 0:07:24 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 7077818 2022-10-23 03:39:11 2022-10-23 10:36:10 2022-10-23 10:47:44 0:11:34 0:04:46 0:06:48 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi003 with status 5: 'sudo systemctl stop ceph-f3c888e6-52bf-11ed-8438-001a4aab830c@mon.smithi003'

pass 7077819 2022-10-23 03:39:13 2022-10-23 10:36:30 2022-10-23 11:37:59 1:01:29 0:51:44 0:09:45 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7077820 2022-10-23 03:39:14 2022-10-23 10:36:41 2022-10-23 11:04:34 0:27:53 0:19:59 0:07:54 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5acf5790-52c0-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077821 2022-10-23 03:39:15 2022-10-23 10:37:41 2022-10-23 11:45:47 1:08:06 1:00:36 0:07:30 smithi main centos 8.stream rados/singleton/{all/ec-lost-unfound mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 7077822 2022-10-23 03:39:16 2022-10-23 10:38:52 2022-10-23 10:53:16 0:14:24 0:07:58 0:06:26 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 7077823 2022-10-23 03:39:18 2022-10-23 10:38:52 2022-10-23 11:05:59 0:27:07 0:18:03 0:09:04 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077824 2022-10-23 03:39:19 2022-10-23 10:38:53 2022-10-23 10:52:42 0:13:49 0:06:06 0:07:43 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi006 with status 5: 'sudo systemctl stop ceph-8d6bebd2-52c0-11ed-8438-001a4aab830c@mon.a'

pass 7077825 2022-10-23 03:39:20 2022-10-23 10:39:23 2022-10-23 16:02:39 5:23:16 4:40:36 0:42:40 smithi main centos 8.stream rados/objectstore/{backends/objectstore supported-random-distro$/{centos_8}} 1
fail 7077826 2022-10-23 03:39:21 2022-10-23 10:39:24 2022-10-23 10:55:58 0:16:34 0:07:52 0:08:42 smithi main rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-fa852a8a-52c0-11ed-8438-001a4aab830c@mon.a'

pass 7077827 2022-10-23 03:39:23 2022-10-23 10:40:24 2022-10-23 11:19:24 0:39:00 0:33:22 0:05:38 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/mon_recovery validater/valgrind} 2
pass 7077828 2022-10-23 03:39:24 2022-10-23 11:18:56 1614 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7077829 2022-10-23 03:39:25 2022-10-23 10:41:15 2022-10-23 11:18:54 0:37:39 0:31:20 0:06:19 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7077830 2022-10-23 03:39:26 2022-10-23 10:41:45 2022-10-23 10:57:35 0:15:50 0:05:12 0:10:38 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-4b94825e-52c1-11ed-8438-001a4aab830c@mon.smithi121'

pass 7077831 2022-10-23 03:39:28 2022-10-23 10:42:16 2022-10-23 11:03:35 0:21:19 0:11:38 0:09:41 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 7077832 2022-10-23 03:39:29 2022-10-23 10:42:46 2022-10-23 11:16:14 0:33:28 0:25:57 0:07:31 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} 2
pass 7077833 2022-10-23 03:39:30 2022-10-23 10:43:17 2022-10-23 11:00:14 0:16:57 0:07:53 0:09:04 smithi main ubuntu 20.04 rados/singleton/{all/erasure-code-nonregression mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077834 2022-10-23 03:39:31 2022-10-23 10:43:17 2022-10-23 11:10:10 0:26:53 0:21:05 0:05:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 934105e6-52c1-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077835 2022-10-23 03:39:32 2022-10-23 10:43:17 2022-10-23 11:02:49 0:19:32 0:12:44 0:06:48 smithi main rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/workunits} 2
fail 7077836 2022-10-23 03:39:34 2022-10-23 10:43:48 2022-10-23 11:01:07 0:17:19 0:09:33 0:07:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Command failed on smithi156 with status 5: 'sudo systemctl stop ceph-b508071a-52c1-11ed-8438-001a4aab830c@mon.a'

pass 7077837 2022-10-23 03:39:35 2022-10-23 10:43:48 2022-10-23 11:02:03 0:18:15 0:11:40 0:06:35 smithi main centos 8.stream rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
fail 7077838 2022-10-23 03:39:36 2022-10-23 10:43:49 2022-10-23 10:57:52 0:14:03 0:07:35 0:06:28 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-7d43f672-52c1-11ed-8438-001a4aab830c@mon.smithi178'

fail 7077839 2022-10-23 03:39:37 2022-10-23 10:44:29 2022-10-23 10:58:49 0:14:20 0:06:59 0:07:21 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi005 with status 5: 'sudo systemctl stop ceph-946a8adc-52c1-11ed-8438-001a4aab830c@mon.smithi005'

fail 7077840 2022-10-23 03:39:38 2022-10-23 10:45:00 2022-10-23 11:04:12 0:19:12 0:09:49 0:09:23 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-2cdb4dce-52c2-11ed-8438-001a4aab830c@mon.a'

pass 7077841 2022-10-23 03:39:40 2022-10-23 10:46:20 2022-10-23 12:25:24 1:39:04 1:33:26 0:05:38 smithi main rhel 8.4 rados/singleton/{all/lost-unfound-delete mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 7077842 2022-10-23 03:39:41 2022-10-23 10:46:21 2022-10-23 11:08:43 0:22:22 0:15:10 0:07:12 smithi main rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077843 2022-10-23 03:39:42 2022-10-23 10:47:51 2022-10-23 11:18:05 0:30:14 0:20:44 0:09:30 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
fail 7077844 2022-10-23 03:39:43 2022-10-23 10:48:12 2022-10-23 11:05:48 0:17:36 0:10:50 0:06:46 smithi main rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi032 with status 5: 'sudo systemctl stop ceph-7e438f8c-52c2-11ed-8438-001a4aab830c@mon.a'

pass 7077845 2022-10-23 03:39:45 2022-10-23 10:48:42 2022-10-23 11:22:55 0:34:13 0:23:56 0:10:17 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
fail 7077846 2022-10-23 03:39:46 2022-10-23 10:49:13 2022-10-23 11:04:40 0:15:27 0:06:40 0:08:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-4e16e584-52c2-11ed-8438-001a4aab830c@mon.a'

pass 7077847 2022-10-23 03:39:47 2022-10-23 10:50:54 2022-10-23 11:10:07 0:19:13 0:07:34 0:11:39 smithi main ubuntu 20.04 rados/multimon/{clusters/21 mon_election/classic msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 3
pass 7077848 2022-10-23 03:39:48 2022-10-23 10:52:44 2022-10-23 11:44:10 0:51:26 0:45:45 0:05:41 smithi main rhel 8.4 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 7077849 2022-10-23 03:39:49 2022-10-23 10:53:25 2022-10-23 11:07:59 0:14:34 0:04:36 0:09:58 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-c19692ca-52c2-11ed-8438-001a4aab830c@mon.a'

pass 7077850 2022-10-23 03:39:51 2022-10-23 10:56:06 2022-10-23 11:31:02 0:34:56 0:23:36 0:11:20 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-many-deletes} 2
fail 7077851 2022-10-23 03:39:52 2022-10-23 10:57:46 2022-10-23 11:12:38 0:14:52 0:09:01 0:05:51 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-84f193b4-52c3-11ed-8438-001a4aab830c@mon.a'

pass 7077852 2022-10-23 03:39:53 2022-10-23 10:57:57 2022-10-23 12:01:16 1:03:19 0:55:55 0:07:24 smithi main rhel 8.4 rados/singleton/{all/lost-unfound mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
fail 7077853 2022-10-23 03:39:54 2022-10-23 10:57:57 2022-10-23 11:13:36 0:15:39 0:07:43 0:07:56 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi090 with status 5: 'sudo systemctl stop ceph-6af8bdfc-52c3-11ed-8438-001a4aab830c@mon.smithi090'

fail 7077854 2022-10-23 03:39:55 2022-10-23 10:57:57 2022-10-23 11:27:34 0:29:37 0:19:37 0:10:00 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi097 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7482a982-52c3-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077855 2022-10-23 03:39:57 2022-10-23 10:58:08 2022-10-23 11:21:08 0:23:00 0:10:00 0:13:00 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
fail 7077856 2022-10-23 03:39:58 2022-10-23 10:58:58 2022-10-23 11:26:16 0:27:18 0:21:01 0:06:17 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9236e36-52c3-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077857 2022-10-23 03:39:59 2022-10-23 10:59:19 2022-10-23 11:39:14 0:39:55 0:32:42 0:07:13 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/one workloads/rados_mon_workunits} 2
pass 7077858 2022-10-23 03:40:00 2022-10-23 11:00:09 2022-10-23 11:30:54 0:30:45 0:24:32 0:06:13 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps} 2
pass 7077859 2022-10-23 03:40:01 2022-10-23 11:00:20 2022-10-23 11:22:07 0:21:47 0:13:47 0:08:00 smithi main centos 8.stream rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
fail 7077860 2022-10-23 03:40:02 2022-10-23 11:01:10 2022-10-23 11:18:22 0:17:12 0:05:01 0:12:11 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-2a43290e-52c4-11ed-8438-001a4aab830c@mon.a'

pass 7077861 2022-10-23 03:40:04 2022-10-23 11:02:51 2022-10-23 11:19:47 0:16:56 0:07:59 0:08:57 smithi main ubuntu 20.04 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077862 2022-10-23 03:40:05 2022-10-23 11:02:51 2022-10-23 11:17:03 0:14:12 0:06:25 0:07:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi059 with status 5: 'sudo systemctl stop ceph-00809066-52c4-11ed-8438-001a4aab830c@mon.a'

fail 7077863 2022-10-23 03:40:06 2022-10-23 11:03:42 2022-10-23 11:21:55 0:18:13 0:10:06 0:08:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-ae18856c-52c4-11ed-8438-001a4aab830c@mon.a'

fail 7077864 2022-10-23 03:40:07 2022-10-23 11:04:42 2022-10-23 11:18:17 0:13:35 0:07:29 0:06:06 smithi main rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-531005a0-52c4-11ed-8438-001a4aab830c@mon.smithi160'

pass 7077865 2022-10-23 03:40:08 2022-10-23 11:04:43 2022-10-23 11:35:59 0:31:16 0:25:27 0:05:49 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/lockdep} 2
pass 7077866 2022-10-23 03:40:10 2022-10-23 11:04:43 2022-10-23 11:27:02 0:22:19 0:14:57 0:07:22 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7077867 2022-10-23 03:40:11 2022-10-23 11:06:04 2022-10-23 11:44:06 0:38:02 0:26:21 0:11:41 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077868 2022-10-23 03:40:12 2022-10-23 11:08:04 2022-10-23 11:39:50 0:31:46 0:25:23 0:06:23 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
fail 7077869 2022-10-23 03:40:13 2022-10-23 11:08:45 2022-10-23 11:20:07 0:11:22 0:04:51 0:06:31 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi003 with status 5: 'sudo systemctl stop ceph-7d337cc2-52c4-11ed-8438-001a4aab830c@mon.smithi003'

pass 7077870 2022-10-23 03:40:14 2022-10-23 11:08:45 2022-10-23 12:36:05 1:27:20 1:18:01 0:09:19 smithi main rhel 8.4 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/dashboard} 2
pass 7077871 2022-10-23 03:40:16 2022-10-23 11:09:56 2022-10-23 11:26:56 0:17:00 0:10:52 0:06:08 smithi main centos 8.stream rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8}} 1
pass 7077872 2022-10-23 03:40:17 2022-10-23 11:09:56 2022-10-23 11:33:25 0:23:29 0:14:10 0:09:19 smithi main ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
pass 7077873 2022-10-23 03:40:18 2022-10-23 11:10:16 2022-10-23 11:36:45 0:26:29 0:20:29 0:06:00 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7077874 2022-10-23 03:40:19 2022-10-23 11:10:17 2022-10-23 11:31:23 0:21:06 0:15:24 0:05:42 smithi main rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} 1
pass 7077875 2022-10-23 03:40:21 2022-10-23 11:10:17 2022-10-23 14:21:54 3:11:37 3:00:31 0:11:06 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
pass 7077876 2022-10-23 03:40:22 2022-10-23 11:13:38 2022-10-23 11:42:15 0:28:37 0:23:27 0:05:10 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
pass 7077877 2022-10-23 03:40:23 2022-10-23 11:13:39 2022-10-23 11:37:24 0:23:45 0:16:38 0:07:07 smithi main rhel 8.4 rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} 1
fail 7077878 2022-10-23 03:40:24 2022-10-23 11:14:09 2022-10-23 11:33:41 0:19:32 0:09:35 0:09:57 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi047 with status 5: 'sudo systemctl stop ceph-427aec58-52c6-11ed-8438-001a4aab830c@mon.a'

pass 7077879 2022-10-23 03:40:25 2022-10-23 11:16:20 2022-10-23 11:52:06 0:35:46 0:28:06 0:07:40 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
pass 7077880 2022-10-23 03:40:27 2022-10-23 11:17:10 2022-10-23 11:37:51 0:20:41 0:13:59 0:06:42 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/crash} 2
pass 7077881 2022-10-23 03:40:28 2022-10-23 11:17:11 2022-10-23 11:37:58 0:20:47 0:10:35 0:10:12 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache} 2
fail 7077882 2022-10-23 03:40:29 2022-10-23 11:18:11 2022-10-23 11:32:18 0:14:07 0:06:40 0:07:27 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-2ae8dc62-52c6-11ed-8438-001a4aab830c@mon.a'

fail 7077883 2022-10-23 03:40:30 2022-10-23 11:18:22 2022-10-23 11:29:59 0:11:37 0:04:50 0:06:47 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-dd46ea12-52c5-11ed-8438-001a4aab830c@mon.smithi112'

fail 7077884 2022-10-23 03:40:31 2022-10-23 11:18:32 2022-10-23 11:36:18 0:17:46 0:09:46 0:08:00 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-a5c7c3f8-52c6-11ed-8438-001a4aab830c@mon.a'

pass 7077885 2022-10-23 03:40:33 2022-10-23 11:19:03 2022-10-23 11:40:52 0:21:49 0:14:04 0:07:45 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7077886 2022-10-23 03:40:34 2022-10-23 11:19:33 2022-10-23 11:48:47 0:29:14 0:21:22 0:07:52 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ab2fe924-52c6-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077887 2022-10-23 03:40:35 2022-10-23 11:19:54 2022-10-23 11:42:52 0:22:58 0:15:49 0:07:09 smithi main rhel 8.4 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7077888 2022-10-23 03:40:36 2022-10-23 11:20:14 2022-10-23 11:38:59 0:18:45 0:09:07 0:09:38 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7077889 2022-10-23 03:40:37 2022-10-23 11:20:14 2022-10-23 11:49:18 0:29:04 0:21:14 0:07:50 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9c69d9fe-52c6-11ed-8438-001a4aab830c -e sha1=d43ef73d3699233fe79a16a2a64561a856f3e0cd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

pass 7077890 2022-10-23 03:40:39 2022-10-23 11:22:05 2022-10-23 11:41:24 0:19:19 0:09:52 0:09:27 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7077891 2022-10-23 03:40:40 2022-10-23 11:22:05 2022-10-23 11:46:58 0:24:53 0:17:48 0:07:05 smithi main rhel 8.4 rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} 2
fail 7077892 2022-10-23 03:40:41 2022-10-23 11:22:06 2022-10-23 11:40:35 0:18:29 0:07:21 0:11:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi081.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
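
Note: this rook smoke job runs against rook/master, and the failure is a missing file rather than a cluster problem: newer Rook branches relocated the example manifests out of cluster/examples/kubernetes/ceph (they now live under deploy/examples), so the hard-coded operator.yaml path no longer exists on a master checkout. A quick way to confirm which layout the checked-out branch uses (a sketch, assuming the clone lives in ./rook):

    ls rook/cluster/examples/kubernetes/ceph/operator.yaml 2>/dev/null \
      || ls rook/deploy/examples/operator.yaml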

pass 7077893 2022-10-23 03:40:42 2022-10-23 11:22:56 2022-10-23 12:03:10 0:40:14 0:30:56 0:09:18 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7077894 2022-10-23 03:40:44 2022-10-23 11:26:17 2022-10-23 11:48:59 0:22:42 0:13:47 0:08:55 smithi main rhel 8.4 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 2
pass 7077895 2022-10-23 03:40:45 2022-10-23 11:26:58 2022-10-23 12:04:16 0:37:18 0:31:01 0:06:17 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
pass 7077896 2022-10-23 03:40:46 2022-10-23 11:27:08 2022-10-23 12:00:54 0:33:46 0:22:50 0:10:56 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects} 2
fail 7077897 2022-10-23 03:40:48 2022-10-23 11:27:39 2022-10-23 11:43:12 0:15:33 0:06:38 0:08:55 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-ba8f0200-52c7-11ed-8438-001a4aab830c@mon.a'

pass 7077898 2022-10-23 03:40:49 2022-10-23 11:30:10 2022-10-23 11:49:16 0:19:06 0:13:08 0:05:58 smithi main rhel 8.4 rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
fail 7077899 2022-10-23 03:40:50 2022-10-23 11:30:10 2022-10-23 11:43:35 0:13:25 0:07:29 0:05:56 smithi main rhel 8.4 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi018 with status 5: 'sudo systemctl stop ceph-dd85a372-52c7-11ed-8438-001a4aab830c@mon.smithi018'

pass 7077900 2022-10-23 03:40:51 2022-10-23 11:31:00 2022-10-23 11:53:25 0:22:25 0:16:54 0:05:31 smithi main rhel 8.4 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7077901 2022-10-23 03:40:53 2022-10-23 11:31:01 2022-10-23 11:48:06 0:17:05 0:11:18 0:05:47 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
fail 7077902 2022-10-23 03:40:54 2022-10-23 11:31:11 2022-10-23 11:46:21 0:15:10 0:06:26 0:08:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-186b654e-52c8-11ed-8438-001a4aab830c@mon.a'

pass 7077903 2022-10-23 03:40:55 2022-10-23 11:32:22 2022-10-23 12:10:38 0:38:16 0:31:04 0:07:12 smithi main rhel 8.4 rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/snaps-few-objects} 2
fail 7077904 2022-10-23 03:40:56 2022-10-23 11:33:32 2022-10-23 11:49:00 0:15:28 0:05:11 0:10:17 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi047 with status 5: 'sudo systemctl stop ceph-7b801c7e-52c8-11ed-8438-001a4aab830c@mon.smithi047'

pass 7077905 2022-10-23 03:40:58 2022-10-23 11:33:43 2022-10-23 12:25:21 0:51:38 0:41:51 0:09:47 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
pass 7077906 2022-10-23 03:40:59 2022-10-23 11:36:03 2022-10-23 11:57:26 0:21:23 0:13:44 0:07:39 smithi main centos 8.stream rados/singleton/{all/mon-config-key-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
pass 7077907 2022-10-23 03:41:00 2022-10-23 11:36:24 2022-10-23 12:01:48 0:25:24 0:17:56 0:07:28 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
fail 7077908 2022-10-23 03:41:01 2022-10-23 11:36:54 2022-10-23 11:54:39 0:17:45 0:10:16 0:07:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi150 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=031d56cfae658907a3f24cb5740764fd798d7d2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7077909 2022-10-23 03:41:03 2022-10-23 11:37:25 2022-10-23 12:20:33 0:43:08 0:35:58 0:07:10 smithi main rhel 8.4 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
fail 7077910 2022-10-23 03:41:04 2022-10-23 11:37:55 2022-10-23 11:53:27 0:15:32 0:05:13 0:10:19 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi046 with status 5: 'sudo systemctl stop ceph-1a6a6f1a-52c9-11ed-8438-001a4aab830c@mon.smithi046'

pass 7077911 2022-10-23 03:41:05 2022-10-23 11:38:06 2022-10-23 11:54:57 0:16:51 0:11:14 0:05:37 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7077912 2022-10-23 03:41:06 2022-10-23 11:38:06 2022-10-23 13:59:51 2:21:45 2:15:33 0:06:12 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/valgrind} 2