Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7515636 2024-01-12 21:54:32 2024-01-13 07:06:31 2024-01-13 07:50:57 0:44:26 0:34:14 0:10:12 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
pass 7515637 2024-01-12 21:54:32 2024-01-13 07:07:11 2024-01-13 07:40:12 0:33:01 0:26:51 0:06:10 smithi main rhel 8.6 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7515638 2024-01-12 21:54:33 2024-01-13 07:07:11 2024-01-13 09:10:40 2:03:29 1:54:39 0:08:50 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
fail 7515639 2024-01-12 21:54:34 2024-01-13 07:07:12 2024-01-13 07:26:53 0:19:41 0:11:50 0:07:51 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi067 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7e5e66fe-b1e4-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi067:/dev/nvme4n1'

fail 7515640 2024-01-12 21:54:35 2024-01-13 07:07:52 2024-01-13 13:26:18 6:18:26 5:28:07 0:50:19 smithi main centos 8.stream rados/objectstore/{backends/objectstore supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi061 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

pass 7515641 2024-01-12 21:54:36 2024-01-13 07:08:43 2024-01-13 07:41:05 0:32:22 0:24:11 0:08:11 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7515642 2024-01-12 21:54:37 2024-01-13 07:09:43 2024-01-13 07:34:31 0:24:48 0:14:30 0:10:18 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7515643 2024-01-12 21:54:37 2024-01-13 07:10:44 2024-01-13 07:35:08 0:24:24 0:15:15 0:09:09 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
pass 7515644 2024-01-12 21:54:38 2024-01-13 07:10:54 2024-01-13 07:44:04 0:33:10 0:26:48 0:06:22 smithi main rhel 8.6 rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
fail 7515645 2024-01-12 21:54:39 2024-01-13 07:10:55 2024-01-13 07:38:06 0:27:11 0:21:18 0:05:53 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515646 2024-01-12 21:54:40 2024-01-13 07:11:05 2024-01-13 08:04:04 0:52:59 0:47:04 0:05:55 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/radosbench} 2
pass 7515647 2024-01-12 21:54:41 2024-01-13 07:11:05 2024-01-13 07:48:07 0:37:02 0:27:11 0:09:51 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515648 2024-01-12 21:54:42 2024-01-13 07:11:06 2024-01-13 07:51:06 0:40:00 0:30:53 0:09:07 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7515649 2024-01-12 21:54:42 2024-01-13 07:11:06 2024-01-13 07:31:54 0:20:48 0:11:13 0:09:35 smithi main centos 8.stream rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7515650 2024-01-12 21:54:43 2024-01-13 07:11:06 2024-01-13 08:06:26 0:55:20 0:49:41 0:05:39 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_mon_osdmap_prune} 2
pass 7515651 2024-01-12 21:54:44 2024-01-13 07:11:17 2024-01-13 07:48:44 0:37:27 0:27:18 0:10:09 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
pass 7515652 2024-01-12 21:54:45 2024-01-13 07:11:57 2024-01-13 07:57:06 0:45:09 0:35:40 0:09:29 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/mon_recovery validater/valgrind} 2
pass 7515653 2024-01-12 21:54:46 2024-01-13 07:12:08 2024-01-13 07:47:25 0:35:17 0:26:28 0:08:49 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
fail 7515654 2024-01-12 21:54:47 2024-01-13 07:12:28 2024-01-13 07:42:38 0:30:10 0:19:53 0:10:17 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515655 2024-01-12 21:54:47 2024-01-13 07:12:49 2024-01-13 07:36:31 0:23:42 0:17:23 0:06:19 smithi main rhel 8.6 rados/singleton/{all/peer mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
pass 7515656 2024-01-12 21:54:48 2024-01-13 07:12:49 2024-01-13 07:59:23 0:46:34 0:36:09 0:10:25 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7515657 2024-01-12 21:54:49 2024-01-13 07:13:50 2024-01-13 07:47:31 0:33:41 0:27:18 0:06:23 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-stupid supported-random-distro$/{rhel_8} tasks/progress} 2
pass 7515658 2024-01-12 21:54:50 2024-01-13 07:14:30 2024-01-13 07:37:17 0:22:47 0:11:54 0:10:53 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515659 2024-01-12 21:54:51 2024-01-13 07:15:31 2024-01-13 07:50:50 0:35:19 0:26:12 0:09:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7515660 2024-01-12 21:54:52 2024-01-13 07:15:41 2024-01-13 07:58:53 0:43:12 0:37:17 0:05:55 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7515661 2024-01-12 21:54:52 2024-01-13 07:15:51 2024-01-13 07:50:59 0:35:08 0:28:05 0:07:03 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect} 2
fail 7515662 2024-01-12 21:54:53 2024-01-13 07:16:02 2024-01-13 07:35:52 0:19:50 0:09:07 0:10:43 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi154 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a9aa8b34-b1e5-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi154:/dev/nvme4n1'

fail 7515663 2024-01-12 21:54:54 2024-01-13 07:16:12 2024-01-13 07:40:42 0:24:30 0:17:14 0:07:16 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515664 2024-01-12 21:54:55 2024-01-13 07:17:13 2024-01-13 08:00:14 0:43:01 0:37:01 0:06:00 smithi main rhel 8.6 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7515665 2024-01-12 21:54:56 2024-01-13 07:17:13 2024-01-13 07:49:18 0:32:05 0:25:04 0:07:01 smithi main rhel 8.6 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
pass 7515666 2024-01-12 21:54:57 2024-01-13 07:18:24 2024-01-13 08:08:01 0:49:37 0:38:02 0:11:35 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
pass 7515667 2024-01-12 21:54:57 2024-01-13 07:18:54 2024-01-13 08:00:42 0:41:48 0:34:27 0:07:21 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7515668 2024-01-12 21:54:58 2024-01-13 07:19:24 2024-01-13 07:54:34 0:35:10 0:24:48 0:10:22 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 7515669 2024-01-12 21:54:59 2024-01-13 07:20:25 2024-01-13 07:45:43 0:25:18 0:15:03 0:10:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
pass 7515670 2024-01-12 21:55:00 2024-01-13 07:20:25 2024-01-13 07:47:09 0:26:44 0:16:47 0:09:57 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} 2
fail 7515671 2024-01-12 21:55:01 2024-01-13 07:50:35 1112 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515672 2024-01-12 21:55:02 2024-01-13 07:20:26 2024-01-13 07:43:00 0:22:34 0:11:52 0:10:42 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
pass 7515673 2024-01-12 21:55:02 2024-01-13 07:20:26 2024-01-13 08:01:34 0:41:08 0:30:59 0:10:09 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515674 2024-01-12 21:55:03 2024-01-13 07:20:37 2024-01-13 07:47:57 0:27:20 0:20:36 0:06:44 smithi main rhel 8.6 rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7515675 2024-01-12 21:55:04 2024-01-13 07:20:57 2024-01-13 07:46:30 0:25:33 0:15:55 0:09:38 smithi main ubuntu 20.04 rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515676 2024-01-12 21:55:05 2024-01-13 07:20:58 2024-01-13 07:52:44 0:31:46 0:22:26 0:09:20 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515677 2024-01-12 21:55:06 2024-01-13 07:20:58 2024-01-13 08:24:00 1:03:02 0:51:28 0:11:34 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 7515678 2024-01-12 21:55:06 2024-01-13 07:21:49 2024-01-13 08:03:36 0:41:47 0:31:54 0:09:53 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7515679 2024-01-12 21:55:07 2024-01-13 07:21:49 2024-01-13 07:43:01 0:21:12 0:09:43 0:11:29 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi124 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9c475020-b1e6-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi124:/dev/nvme4n1'

fail 7515680 2024-01-12 21:55:08 2024-01-13 07:23:10 2024-01-13 07:48:50 0:25:40 0:14:46 0:10:54 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515681 2024-01-12 21:55:09 2024-01-13 07:23:10 2024-01-13 07:51:28 0:28:18 0:20:34 0:07:44 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_set_object} 2
fail 7515682 2024-01-12 21:55:10 2024-01-13 07:23:30 2024-01-13 07:52:37 0:29:07 0:17:51 0:11:16 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

"2024-01-13T07:49:14.207903+0000 mon.a (mon.0) 469 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7515683 2024-01-12 21:55:11 2024-01-13 07:25:41 2024-01-13 08:01:02 0:35:21 0:24:42 0:10:39 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
pass 7515684 2024-01-12 21:55:11 2024-01-13 07:25:52 2024-01-13 08:06:13 0:40:21 0:32:43 0:07:38 smithi main rhel 8.6 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7515685 2024-01-12 21:55:12 2024-01-13 07:27:02 2024-01-13 07:49:10 0:22:08 0:10:44 0:11:24 smithi main centos 8.stream rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
fail 7515686 2024-01-12 21:55:13 2024-01-13 07:30:03 2024-01-13 08:02:00 0:31:57 0:20:36 0:11:21 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515687 2024-01-12 21:55:14 2024-01-13 07:32:04 2024-01-13 08:43:50 1:11:46 0:58:53 0:12:53 smithi main centos 8.stream rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} 2
pass 7515688 2024-01-12 21:55:15 2024-01-13 07:34:04 2024-01-13 07:57:11 0:23:07 0:16:39 0:06:28 smithi main rhel 8.6 rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} 1
pass 7515689 2024-01-12 21:55:16 2024-01-13 07:34:05 2024-01-13 08:00:34 0:26:29 0:15:09 0:11:20 smithi main ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
pass 7515690 2024-01-12 21:55:16 2024-01-13 07:34:35 2024-01-13 08:05:41 0:31:06 0:21:34 0:09:32 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7515691 2024-01-12 21:55:17 2024-01-13 07:35:15 2024-01-13 07:57:56 0:22:41 0:12:56 0:09:45 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/crush} 1
pass 7515692 2024-01-12 21:55:18 2024-01-13 07:35:56 2024-01-13 10:43:14 3:07:18 2:56:33 0:10:45 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
pass 7515693 2024-01-12 21:55:19 2024-01-13 07:37:27 2024-01-13 08:10:21 0:32:54 0:23:24 0:09:30 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
fail 7515694 2024-01-12 21:55:20 2024-01-13 07:37:27 2024-01-13 08:09:08 0:31:41 0:22:24 0:09:17 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7515695 2024-01-12 21:55:21 2024-01-13 07:38:07 2024-01-13 08:18:10 0:40:03 0:26:47 0:13:16 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515696 2024-01-12 21:55:21 2024-01-13 07:40:18 2024-01-13 08:20:14 0:39:56 0:30:18 0:09:38 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
pass 7515697 2024-01-12 21:55:22 2024-01-13 07:40:49 2024-01-13 08:16:33 0:35:44 0:25:11 0:10:33 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
pass 7515698 2024-01-12 21:55:23 2024-01-13 07:41:09 2024-01-13 08:19:39 0:38:30 0:27:00 0:11:30 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7515699 2024-01-12 21:55:24 2024-01-13 07:42:39 2024-01-13 08:09:15 0:26:36 0:15:09 0:11:27 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/set-chunks-read} 2
pass 7515700 2024-01-12 21:55:25 2024-01-13 07:43:10 2024-01-13 08:05:09 0:21:59 0:11:59 0:10:00 smithi main ubuntu 18.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} 2
pass 7515701 2024-01-12 21:55:26 2024-01-13 07:43:10 2024-01-13 08:09:08 0:25:58 0:12:59 0:12:59 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/filestore-xfs supported-random-distro$/{centos_8} tasks/prometheus} 2
pass 7515702 2024-01-12 21:55:26 2024-01-13 07:45:51 2024-01-13 08:14:40 0:28:49 0:20:08 0:08:41 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515703 2024-01-12 21:55:27 2024-01-13 07:47:32 2024-01-13 08:19:32 0:32:00 0:21:37 0:10:23 smithi main ubuntu 20.04 rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515704 2024-01-12 21:55:28 2024-01-13 07:47:32 2024-01-13 08:31:13 0:43:41 0:35:49 0:07:52 smithi main rhel 8.6 rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} 3
pass 7515705 2024-01-12 21:55:29 2024-01-13 07:48:03 2024-01-13 08:08:22 0:20:19 0:09:54 0:10:25 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515706 2024-01-12 21:55:30 2024-01-13 07:48:13 2024-01-13 08:11:53 0:23:40 0:11:33 0:12:07 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7515707 2024-01-12 21:55:31 2024-01-13 07:48:53 2024-01-13 08:07:55 0:19:02 0:08:58 0:10:04 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 39c3cc7c-b1ea-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi032:/dev/nvme4n1'

pass 7515708 2024-01-12 21:55:31 2024-01-13 07:48:54 2024-01-13 08:11:22 0:22:28 0:12:10 0:10:18 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
fail 7515709 2024-01-12 21:55:32 2024-01-13 07:49:14 2024-01-13 08:12:26 0:23:12 0:14:21 0:08:51 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515710 2024-01-12 21:55:33 2024-01-13 07:49:25 2024-01-13 08:16:58 0:27:33 0:15:04 0:12:29 smithi main ubuntu 18.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} 1
pass 7515711 2024-01-12 21:55:34 2024-01-13 07:50:45 2024-01-13 08:26:05 0:35:20 0:28:37 0:06:43 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects-balanced} 2
pass 7515712 2024-01-12 21:55:35 2024-01-13 07:50:56 2024-01-13 08:22:58 0:32:02 0:21:04 0:10:58 smithi main centos 8.stream rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
pass 7515713 2024-01-12 21:55:36 2024-01-13 07:51:06 2024-01-13 08:36:07 0:45:01 0:35:32 0:09:29 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
pass 7515714 2024-01-12 21:55:36 2024-01-13 07:51:06 2024-01-13 08:32:17 0:41:11 0:33:55 0:07:16 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7515715 2024-01-12 21:55:37 2024-01-13 07:51:07 2024-01-13 08:27:42 0:36:35 0:24:22 0:12:13 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7515716 2024-01-12 21:55:38 2024-01-13 07:51:37 2024-01-13 08:22:36 0:30:59 0:20:51 0:10:08 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7515717 2024-01-12 21:55:39 2024-01-13 07:52:48 2024-01-13 08:17:22 0:24:34 0:14:38 0:09:56 smithi main centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7515718 2024-01-12 21:55:40 2024-01-13 07:52:48 2024-01-13 08:41:58 0:49:10 0:40:55 0:08:15 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7515719 2024-01-12 21:55:40 2024-01-13 07:54:39 2024-01-13 08:24:22 0:29:43 0:16:25 0:13:18 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_python} 2
pass 7515720 2024-01-12 21:55:41 2024-01-13 07:57:09 2024-01-13 08:18:02 0:20:53 0:11:45 0:09:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7515721 2024-01-12 21:55:42 2024-01-13 07:57:10 2024-01-13 08:32:14 0:35:04 0:23:19 0:11:45 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515722 2024-01-12 21:55:43 2024-01-13 07:58:00 2024-01-13 08:44:27 0:46:27 0:33:57 0:12:30 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 7515723 2024-01-12 21:55:44 2024-01-13 07:59:01 2024-01-13 08:32:40 0:33:39 0:23:03 0:10:36 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-localized} 2
pass 7515724 2024-01-12 21:55:45 2024-01-13 07:59:31 2024-01-13 08:23:59 0:24:28 0:13:27 0:11:01 smithi main ubuntu 20.04 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
fail 7515725 2024-01-12 21:55:45 2024-01-13 07:59:31 2024-01-13 08:27:56 0:28:25 0:19:44 0:08:41 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515726 2024-01-12 21:55:46 2024-01-13 08:00:42 2024-01-13 08:25:11 0:24:29 0:16:39 0:07:50 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7515727 2024-01-12 21:55:47 2024-01-13 08:00:42 2024-01-13 08:23:57 0:23:15 0:17:03 0:06:12 smithi main rhel 8.6 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7515728 2024-01-12 21:55:48 2024-01-13 08:00:43 2024-01-13 08:27:56 0:27:13 0:21:14 0:05:59 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515729 2024-01-12 21:55:49 2024-01-13 08:01:03 2024-01-13 08:19:11 0:18:08 0:08:11 0:09:57 smithi main ubuntu 20.04 rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 2
fail 7515730 2024-01-12 21:55:50 2024-01-13 08:01:04 2024-01-13 08:20:12 0:19:08 0:12:50 0:06:18 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi110 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fce9f4b4-b1eb-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi110:/dev/nvme4n1'

pass 7515731 2024-01-12 21:55:50 2024-01-13 08:01:04 2024-01-13 08:49:12 0:48:08 0:38:51 0:09:17 smithi main ubuntu 20.04 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515732 2024-01-12 21:55:51 2024-01-13 08:01:04 2024-01-13 08:34:54 0:33:50 0:24:28 0:09:22 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/small-objects} 2
pass 7515733 2024-01-12 21:55:52 2024-01-13 08:01:45 2024-01-13 08:49:01 0:47:16 0:35:20 0:11:56 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7515734 2024-01-12 21:55:53 2024-01-13 08:03:45 2024-01-13 08:25:54 0:22:09 0:11:08 0:11:01 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
pass 7515735 2024-01-12 21:55:54 2024-01-13 08:04:06 2024-01-13 08:44:26 0:40:20 0:28:50 0:11:30 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515736 2024-01-12 21:55:55 2024-01-13 08:05:16 2024-01-13 08:51:39 0:46:23 0:39:15 0:07:08 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
pass 7515737 2024-01-12 21:55:56 2024-01-13 08:05:47 2024-01-13 08:28:25 0:22:38 0:12:23 0:10:15 smithi main ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
fail 7515738 2024-01-12 21:55:56 2024-01-13 08:06:17 2024-01-13 14:48:08 6:41:51 6:32:10 0:09:41 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi067 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7515739 2024-01-12 21:55:57 2024-01-13 08:06:18 2024-01-13 08:29:51 0:23:33 0:17:09 0:06:24 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515740 2024-01-12 21:55:58 2024-01-13 08:06:28 2024-01-13 08:24:20 0:17:52 0:08:11 0:09:41 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7515741 2024-01-12 21:55:59 2024-01-13 08:07:59 2024-01-13 08:29:08 0:21:09 0:12:10 0:08:59 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/workunits} 2
pass 7515742 2024-01-12 21:56:00 2024-01-13 08:08:09 2024-01-13 08:34:28 0:26:19 0:15:10 0:11:09 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515743 2024-01-12 21:56:00 2024-01-13 08:09:10 2024-01-13 08:32:31 0:23:21 0:15:49 0:07:32 smithi main rhel 8.6 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7515744 2024-01-12 21:56:01 2024-01-13 08:09:10 2024-01-13 09:04:31 0:55:21 0:45:49 0:09:32 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7515745 2024-01-12 21:56:02 2024-01-13 08:09:20 2024-01-13 08:33:05 0:23:45 0:12:17 0:11:28 smithi main centos 8.stream rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 2
pass 7515746 2024-01-12 21:56:03 2024-01-13 08:09:41 2024-01-13 09:03:49 0:54:08 0:43:20 0:10:48 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
pass 7515747 2024-01-12 21:56:04 2024-01-13 08:09:41 2024-01-13 08:48:45 0:39:04 0:27:04 0:12:00 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7515748 2024-01-12 21:56:05 2024-01-13 08:11:32 2024-01-13 08:51:18 0:39:46 0:29:32 0:10:14 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7515749 2024-01-12 21:56:05 2024-01-13 08:12:02 2024-01-13 08:54:14 0:42:12 0:33:54 0:08:18 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7515750 2024-01-12 21:56:06 2024-01-13 08:12:33 2024-01-13 08:33:42 0:21:09 0:11:55 0:09:14 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi031 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cfc1b646-b1ed-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi031:/dev/nvme4n1'

dead 7515751 2024-01-12 21:56:07 2024-01-13 08:14:43 2024-01-13 20:22:54 12:08:11 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

hit max job timeout

pass 7515752 2024-01-12 21:56:08 2024-01-13 08:14:44 2024-01-13 08:42:01 0:27:17 0:19:40 0:07:37 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/rados_stress_watch} 2
pass 7515753 2024-01-12 21:56:09 2024-01-13 08:16:34 2024-01-13 08:44:53 0:28:19 0:15:55 0:12:24 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
pass 7515754 2024-01-12 21:56:10 2024-01-13 08:17:05 2024-01-13 08:42:18 0:25:13 0:17:46 0:07:27 smithi main rhel 8.6 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7515755 2024-01-12 21:56:10 2024-01-13 08:17:25 2024-01-13 08:39:04 0:21:39 0:12:28 0:09:11 smithi main centos 8.stream rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
pass 7515756 2024-01-12 21:56:11 2024-01-13 08:18:06 2024-01-13 08:52:57 0:34:51 0:24:42 0:10:09 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 7515757 2024-01-12 21:56:12 2024-01-13 08:18:16 2024-01-13 09:57:09 1:38:53 1:28:53 0:10:00 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 7515758 2024-01-12 21:56:13 2024-01-13 08:19:37 2024-01-13 08:53:56 0:34:19 0:24:45 0:09:34 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} 2
fail 7515759 2024-01-12 21:56:14 2024-01-13 08:19:47 2024-01-13 08:43:46 0:23:59 0:16:57 0:07:02 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515760 2024-01-12 21:56:15 2024-01-13 08:20:17 2024-01-13 08:59:19 0:39:02 0:29:17 0:09:45 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515761 2024-01-12 21:56:15 2024-01-13 08:20:18 2024-01-13 09:24:31 1:04:13 0:51:39 0:12:34 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7515762 2024-01-12 21:56:16 2024-01-13 08:22:38 2024-01-13 08:45:03 0:22:25 0:12:14 0:10:11 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 7515763 2024-01-12 21:56:17 2024-01-13 08:22:59 2024-01-13 08:59:51 0:36:52 0:26:16 0:10:36 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7515764 2024-01-12 21:56:18 2024-01-13 08:23:59 2024-01-13 08:57:59 0:34:00 0:20:53 0:13:07 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515765 2024-01-12 21:56:19 2024-01-13 08:24:10 2024-01-13 08:45:11 0:21:01 0:11:18 0:09:43 smithi main centos 8.stream rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 7515766 2024-01-12 21:56:20 2024-01-13 08:24:10 2024-01-13 08:45:36 0:21:26 0:11:21 0:10:05 smithi main centos 8.stream rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7515767 2024-01-12 21:56:20 2024-01-13 08:24:10 2024-01-13 08:58:51 0:34:41 0:25:45 0:08:56 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
pass 7515768 2024-01-12 21:56:21 2024-01-13 08:24:21 2024-01-13 09:03:45 0:39:24 0:33:27 0:05:57 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 2
fail 7515769 2024-01-12 21:56:22 2024-01-13 08:24:31 2024-01-13 08:43:31 0:19:00 0:09:01 0:09:59 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi028 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c81b766-b1ef-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi028:/dev/nvme4n1'

pass 7515770 2024-01-12 21:56:23 2024-01-13 08:26:02 2024-01-13 08:45:20 0:19:18 0:10:26 0:08:52 smithi main centos 8.stream rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 7515771 2024-01-12 21:56:24 2024-01-13 08:26:12 2024-01-13 09:11:27 0:45:15 0:35:07 0:10:08 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7515772 2024-01-12 21:56:25 2024-01-13 08:28:03 2024-01-13 09:04:10 0:36:07 0:28:37 0:07:30 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7515773 2024-01-12 21:56:25 2024-01-13 08:28:03 2024-01-13 09:03:43 0:35:40 0:25:04 0:10:36 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7515774 2024-01-12 21:56:26 2024-01-13 08:28:34 2024-01-13 09:04:24 0:35:50 0:28:53 0:06:57 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
fail 7515775 2024-01-12 21:56:27 2024-01-13 08:29:14 2024-01-13 08:55:09 0:25:55 0:15:00 0:10:55 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515776 2024-01-12 21:56:28 2024-01-13 08:29:55 2024-01-13 08:54:41 0:24:46 0:16:19 0:08:27 smithi main centos 8.stream rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7515777 2024-01-12 21:56:29 2024-01-13 08:29:55 2024-01-13 09:14:31 0:44:36 0:33:56 0:10:40 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
fail 7515778 2024-01-12 21:56:29 2024-01-13 08:31:16 2024-01-13 09:05:25 0:34:09 0:23:41 0:10:28 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2024-01-13T09:01:58.523705+0000 mon.a (mon.0) 520 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7515779 2024-01-12 21:56:30 2024-01-13 08:32:16 2024-01-13 09:23:41 0:51:25 0:45:01 0:06:24 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/one workloads/snaps-few-objects} 2
pass 7515780 2024-01-12 21:56:31 2024-01-13 08:32:27 2024-01-13 08:58:41 0:26:14 0:16:49 0:09:25 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7515781 2024-01-12 21:56:32 2024-01-13 08:32:37 2024-01-13 08:54:52 0:22:15 0:11:58 0:10:17 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} tasks/crash} 2
pass 7515782 2024-01-12 21:56:33 2024-01-13 09:01:24 0:21:35 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515783 2024-01-12 21:56:34 2024-01-13 08:33:48 2024-01-13 09:30:37 0:56:49 0:45:14 0:11:35 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7515784 2024-01-12 21:56:35 2024-01-13 08:34:38 2024-01-13 08:54:02 0:19:24 0:13:18 0:06:06 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_striper} 2
pass 7515785 2024-01-12 21:56:35 2024-01-13 08:34:39 2024-01-13 09:21:49 0:47:10 0:41:37 0:05:33 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 7515786 2024-01-12 21:56:36 2024-01-13 08:34:39 2024-01-13 09:01:52 0:27:13 0:17:00 0:10:13 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
pass 7515787 2024-01-12 21:56:37 2024-01-13 08:34:40 2024-01-13 08:54:56 0:20:16 0:10:25 0:09:51 smithi main ubuntu 20.04 rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} 1
pass 7515788 2024-01-12 21:56:38 2024-01-13 08:35:00 2024-01-13 10:28:07 1:53:07 1:42:39 0:10:28 smithi main ubuntu 20.04 rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
pass 7515789 2024-01-12 21:56:39 2024-01-13 08:35:10 2024-01-13 09:17:58 0:42:48 0:31:03 0:11:45 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515790 2024-01-12 21:56:40 2024-01-13 08:36:01 2024-01-13 09:03:20 0:27:19 0:16:41 0:10:38 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7515791 2024-01-12 21:56:41 2024-01-13 08:36:11 2024-01-13 09:05:11 0:29:00 0:17:08 0:11:52 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515792 2024-01-12 21:56:41 2024-01-13 08:42:03 2024-01-13 09:08:47 0:26:44 0:15:25 0:11:19 smithi main centos 8.stream rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 7515793 2024-01-12 21:56:42 2024-01-13 08:42:03 2024-01-13 09:02:33 0:20:30 0:09:47 0:10:43 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi106 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d3f01574-b1f1-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi106:/dev/nvme4n1'

pass 7515794 2024-01-12 21:56:43 2024-01-13 08:42:04 2024-01-13 09:16:35 0:34:31 0:21:17 0:13:14 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7515795 2024-01-12 21:56:44 2024-01-13 08:43:35 2024-01-13 09:05:16 0:21:41 0:12:41 0:09:00 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
fail 7515796 2024-01-12 21:56:45 2024-01-13 08:43:35 2024-01-13 09:10:41 0:27:06 0:21:19 0:05:47 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515797 2024-01-12 21:56:46 2024-01-13 08:43:46 2024-01-13 09:26:12 0:42:26 0:29:29 0:12:57 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
pass 7515798 2024-01-12 21:56:46 2024-01-13 08:44:36 2024-01-13 09:13:45 0:29:09 0:22:17 0:06:52 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} 1
pass 7515799 2024-01-12 21:56:47 2024-01-13 08:44:36 2024-01-13 10:42:42 1:58:06 1:47:44 0:10:22 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
pass 7515800 2024-01-12 21:56:48 2024-01-13 08:44:37 2024-01-13 09:36:52 0:52:15 0:37:55 0:14:20 smithi main centos 8.stream rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
pass 7515801 2024-01-12 21:56:49 2024-01-13 08:44:57 2024-01-13 09:48:56 1:03:59 0:54:06 0:09:53 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 7515802 2024-01-12 21:56:50 2024-01-13 08:45:28 2024-01-13 09:09:01 0:23:33 0:16:51 0:06:42 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515803 2024-01-12 21:56:51 2024-01-13 08:45:38 2024-01-13 09:13:56 0:28:18 0:17:50 0:10:28 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7515804 2024-01-12 21:56:51 2024-01-13 08:48:49 2024-01-13 09:24:09 0:35:20 0:24:35 0:10:45 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7515805 2024-01-12 21:56:52 2024-01-13 08:49:09 2024-01-13 10:07:14 1:18:05 1:07:46 0:10:19 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-agent-big} 2
pass 7515806 2024-01-12 21:56:53 2024-01-13 08:49:10 2024-01-13 09:36:15 0:47:05 0:34:09 0:12:56 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515807 2024-01-12 21:56:54 2024-01-13 08:51:20 2024-01-13 09:32:53 0:41:33 0:31:44 0:09:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515808 2024-01-12 21:56:55 2024-01-13 08:51:41 2024-01-13 09:28:35 0:36:54 0:29:34 0:07:20 smithi main rhel 8.6 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 2
fail 7515809 2024-01-12 21:56:56 2024-01-13 08:53:01 2024-01-13 09:13:02 0:20:01 0:10:03 0:09:58 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7cbb7918-b1f3-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi055:/dev/nvme4n1'

pass 7515810 2024-01-12 21:56:57 2024-01-13 08:54:02 2024-01-13 09:22:53 0:28:51 0:16:38 0:12:13 smithi main ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
pass 7515811 2024-01-12 21:56:57 2024-01-13 08:54:02 2024-01-13 09:41:32 0:47:30 0:36:50 0:10:40 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
pass 7515812 2024-01-12 21:56:58 2024-01-13 08:54:13 2024-01-13 09:24:02 0:29:49 0:22:17 0:07:32 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7515813 2024-01-12 21:56:59 2024-01-13 08:54:23 2024-01-13 09:21:50 0:27:27 0:16:36 0:10:51 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7515814 2024-01-12 21:57:00 2024-01-13 08:54:43 2024-01-13 09:34:54 0:40:11 0:30:21 0:09:50 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_big} 2
fail 7515815 2024-01-12 21:57:01 2024-01-13 08:54:54 2024-01-13 09:29:24 0:34:30 0:23:52 0:10:38 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 82e68f94-b1f3-11ee-95ac-87774f69a715 -e sha1=445562ab4bc3ddfb386936119050695810860bcb -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7515816 2024-01-12 21:57:01 2024-01-13 08:55:14 2024-01-13 09:23:47 0:28:33 0:13:26 0:15:07 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
pass 7515817 2024-01-12 21:57:02 2024-01-13 09:01:35 2024-01-13 09:37:06 0:35:31 0:24:47 0:10:44 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 7515818 2024-01-12 21:57:03 2024-01-13 09:01:35 2024-01-13 09:34:40 0:33:05 0:21:30 0:11:35 smithi main ubuntu 20.04 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
pass 7515819 2024-01-12 21:57:04 2024-01-13 09:02:36 2024-01-13 09:23:27 0:20:51 0:11:50 0:09:01 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
pass 7515820 2024-01-12 21:57:05 2024-01-13 09:02:36 2024-01-13 09:48:27 0:45:51 0:34:20 0:11:31 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
pass 7515821 2024-01-12 21:57:06 2024-01-13 09:03:27 2024-01-13 09:23:23 0:19:56 0:10:28 0:09:28 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7515822 2024-01-12 21:57:06 2024-01-13 09:03:27 2024-01-13 09:31:57 0:28:30 0:17:16 0:11:14 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
pass 7515823 2024-01-12 21:57:07 2024-01-13 09:03:47 2024-01-13 09:30:47 0:27:00 0:21:09 0:05:51 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{rhel_8} tasks/failover} 2
pass 7515824 2024-01-12 21:57:08 2024-01-13 09:03:48 2024-01-13 09:32:35 0:28:47 0:20:26 0:08:21 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7515825 2024-01-12 21:57:09 2024-01-13 09:04:28 2024-01-13 09:31:59 0:27:31 0:18:38 0:08:53 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515826 2024-01-12 21:57:10 2024-01-13 09:04:39 2024-01-13 09:48:51 0:44:12 0:33:27 0:10:45 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 7515827 2024-01-12 21:57:11 2024-01-13 09:05:19 2024-01-13 09:34:28 0:29:09 0:22:38 0:06:31 smithi main rhel 8.6 rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7515828 2024-01-12 21:57:11 2024-01-13 09:05:30 2024-01-13 09:41:18 0:35:48 0:22:16 0:13:32 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 7515829 2024-01-12 21:57:12 2024-01-13 09:08:00 2024-01-13 09:38:06 0:30:06 0:19:04 0:11:02 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515830 2024-01-12 21:57:13 2024-01-13 09:08:51 2024-01-13 09:26:57 0:18:06 0:09:04 0:09:02 smithi main ubuntu 20.04 rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515831 2024-01-12 21:57:14 2024-01-13 09:09:11 2024-01-13 09:36:26 0:27:15 0:16:57 0:10:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7515832 2024-01-12 21:57:15 2024-01-13 09:09:12 2024-01-13 09:51:51 0:42:39 0:33:49 0:08:50 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7515833 2024-01-12 21:57:15 2024-01-13 11:46:47 2:25:37 smithi main ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{ubuntu_latest}} 1
pass 7515834 2024-01-12 21:57:16 2024-01-13 09:10:43 2024-01-13 09:45:43 0:35:00 0:24:57 0:10:03 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
fail 7515835 2024-01-12 21:57:17 2024-01-13 09:10:43 2024-01-13 09:32:26 0:21:43 0:13:45 0:07:58 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e808df7e-b1f5-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi017:/dev/nvme4n1'

pass 7515836 2024-01-12 21:57:18 2024-01-13 09:11:33 2024-01-13 09:54:10 0:42:37 0:32:09 0:10:28 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
fail 7515837 2024-01-12 21:57:19 2024-01-13 09:12:14 2024-01-13 09:42:51 0:30:37 0:20:17 0:10:20 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515838 2024-01-12 21:57:20 2024-01-13 09:13:04 2024-01-13 09:51:13 0:38:09 0:28:18 0:09:51 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515839 2024-01-12 21:57:20 2024-01-13 09:13:55 2024-01-13 09:47:53 0:33:58 0:27:42 0:06:16 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-balanced} 2
pass 7515840 2024-01-12 21:57:21 2024-01-13 09:14:35 2024-01-13 09:48:34 0:33:59 0:22:01 0:11:58 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} 2
pass 7515841 2024-01-12 21:57:22 2024-01-13 09:16:36 2024-01-13 09:33:43 0:17:07 0:08:04 0:09:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 7515842 2024-01-12 21:57:23 2024-01-13 09:16:36 2024-01-13 09:54:57 0:38:21 0:24:52 0:13:29 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
pass 7515843 2024-01-12 21:57:24 2024-01-13 09:18:07 2024-01-13 09:45:10 0:27:03 0:16:54 0:10:09 smithi main rhel 8.6 rados/singleton/{all/admin-socket mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
pass 7515844 2024-01-12 21:57:25 2024-01-13 09:21:58 2024-01-13 10:09:41 0:47:43 0:35:12 0:12:31 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
fail 7515845 2024-01-12 21:57:25 2024-01-13 09:23:28 2024-01-13 09:51:39 0:28:11 0:18:03 0:10:08 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a8060102-b1f7-11ee-95ac-87774f69a715 -e sha1=445562ab4bc3ddfb386936119050695810860bcb -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7515846 2024-01-12 21:57:26 2024-01-13 09:23:49 2024-01-13 09:49:00 0:25:11 0:14:36 0:10:35 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515847 2024-01-12 21:57:27 2024-01-13 09:23:49 2024-01-13 10:09:20 0:45:31 0:38:15 0:07:16 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515848 2024-01-12 21:57:28 2024-01-13 09:24:10 2024-01-13 09:55:51 0:31:41 0:21:06 0:10:35 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mix} 2
pass 7515849 2024-01-12 21:57:29 2024-01-13 09:24:10 2024-01-13 10:06:57 0:42:47 0:32:59 0:09:48 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/misc} 1
pass 7515850 2024-01-12 21:57:30 2024-01-13 09:24:10 2024-01-13 10:09:51 0:45:41 0:35:19 0:10:22 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 7515851 2024-01-12 21:57:30 2024-01-13 09:24:41 2024-01-13 09:54:41 0:30:00 0:20:56 0:09:04 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7515852 2024-01-12 21:57:31 2024-01-13 09:24:41 2024-01-13 10:10:25 0:45:44 0:32:54 0:12:50 smithi main ubuntu 20.04 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515853 2024-01-12 21:57:32 2024-01-13 09:25:42 2024-01-13 10:09:58 0:44:16 0:36:08 0:08:08 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
pass 7515854 2024-01-12 21:57:33 2024-01-13 09:26:12 2024-01-13 10:18:32 0:52:20 0:43:49 0:08:31 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
pass 7515855 2024-01-12 21:57:34 2024-01-13 09:26:13 2024-01-13 10:01:07 0:34:54 0:27:59 0:06:55 smithi main rhel 8.6 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7515856 2024-01-12 21:57:35 2024-01-13 09:26:13 2024-01-13 09:46:32 0:20:19 0:11:36 0:08:43 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi100 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f29e5b24-b1f7-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi100:/dev/nvme4n1'

pass 7515857 2024-01-12 21:57:35 2024-01-13 09:27:03 2024-01-13 09:49:32 0:22:29 0:11:06 0:11:23 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
pass 7515858 2024-01-12 21:57:36 2024-01-13 09:28:24 2024-01-13 09:53:22 0:24:58 0:14:40 0:10:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
fail 7515859 2024-01-12 21:57:37 2024-01-13 09:28:24 2024-01-13 09:59:27 0:31:03 0:19:18 0:11:45 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

fail 7515860 2024-01-12 21:57:38 2024-01-13 09:28:45 2024-01-13 09:56:21 0:27:36 0:20:34 0:07:02 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515861 2024-01-12 21:57:39 2024-01-13 09:29:25 2024-01-13 09:51:53 0:22:28 0:11:40 0:10:48 smithi main centos 8.stream rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 7515862 2024-01-12 21:57:40 2024-01-13 09:30:46 2024-01-13 10:05:50 0:35:04 0:25:32 0:09:32 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
pass 7515863 2024-01-12 21:57:40 2024-01-13 09:30:56 2024-01-13 10:07:07 0:36:11 0:28:55 0:07:16 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-snaps} 2
pass 7515864 2024-01-12 21:57:41 2024-01-13 09:32:07 2024-01-13 09:55:17 0:23:10 0:13:44 0:09:26 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} tasks/insights} 2
pass 7515865 2024-01-12 21:57:42 2024-01-13 09:32:07 2024-01-13 09:59:51 0:27:44 0:21:09 0:06:35 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515866 2024-01-12 21:57:43 2024-01-13 09:32:37 2024-01-13 10:12:45 0:40:08 0:28:59 0:11:09 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515867 2024-01-12 21:57:44 2024-01-13 09:32:38 2024-01-13 10:00:16 0:27:38 0:20:37 0:07:01 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/sync workloads/rados_5925} 2
pass 7515868 2024-01-12 21:57:45 2024-01-13 09:32:58 2024-01-13 09:53:16 0:20:18 0:09:19 0:10:59 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515869 2024-01-12 21:57:45 2024-01-13 09:32:59 2024-01-13 10:25:56 0:52:57 0:42:53 0:10:04 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7515870 2024-01-12 21:57:46 2024-01-13 09:33:09 2024-01-13 11:20:54 1:47:45 1:35:18 0:12:27 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
pass 7515871 2024-01-12 21:57:47 2024-01-13 09:34:50 2024-01-13 10:09:50 0:35:00 0:25:44 0:09:16 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7515872 2024-01-12 21:57:48 2024-01-13 09:34:50 2024-01-13 09:58:34 0:23:44 0:11:11 0:12:33 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7515873 2024-01-12 21:57:49 2024-01-13 09:36:21 2024-01-13 10:01:29 0:25:08 0:17:57 0:07:11 smithi main rhel 8.6 rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
fail 7515874 2024-01-12 21:57:50 2024-01-13 09:36:21 2024-01-13 09:57:26 0:21:05 0:08:59 0:12:06 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7d2c354e-b1f9-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi039:/dev/nvme4n1'

pass 7515875 2024-01-12 21:57:50 2024-01-13 09:37:01 2024-01-13 10:00:50 0:23:49 0:11:06 0:12:43 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache} 2
fail 7515876 2024-01-12 21:57:51 2024-01-13 09:37:12 2024-01-13 10:03:08 0:25:56 0:17:58 0:07:58 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515877 2024-01-12 21:57:52 2024-01-13 09:38:12 2024-01-13 10:17:22 0:39:10 0:30:39 0:08:31 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7515878 2024-01-12 21:57:53 2024-01-13 09:38:13 2024-01-13 10:34:43 0:56:30 0:44:36 0:11:54 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
pass 7515879 2024-01-12 21:57:54 2024-01-13 12:11:34 2:18:44 smithi main ubuntu 20.04 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{ubuntu_latest}} 1
pass 7515880 2024-01-12 21:57:55 2024-01-13 09:41:24 2024-01-13 10:19:02 0:37:38 0:26:45 0:10:53 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread} 2
pass 7515881 2024-01-12 21:57:56 2024-01-13 09:41:34 2024-01-13 10:04:50 0:23:16 0:12:25 0:10:51 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 7515882 2024-01-12 21:57:57 2024-01-13 09:42:55 2024-01-13 10:22:25 0:39:30 0:31:39 0:07:51 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7515883 2024-01-12 21:57:57 2024-01-13 09:45:16 2024-01-13 10:22:01 0:36:45 0:28:08 0:08:37 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 7515884 2024-01-12 21:57:58 2024-01-13 09:46:36 2024-01-13 10:09:44 0:23:08 0:17:18 0:05:50 smithi main rhel 8.6 rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 7515885 2024-01-12 21:57:59 2024-01-13 09:46:36 2024-01-13 10:11:02 0:24:26 0:14:21 0:10:05 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
pass 7515886 2024-01-12 21:58:00 2024-01-13 09:47:57 2024-01-13 10:30:07 0:42:10 0:31:46 0:10:24 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
dead 7515887 2024-01-12 21:58:01 2024-01-13 09:47:57 2024-01-13 21:57:08 12:09:11 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

hit max job timeout

pass 7515888 2024-01-12 21:58:02 2024-01-13 09:48:28 2024-01-13 10:25:31 0:37:03 0:28:12 0:08:51 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7515889 2024-01-12 21:58:03 2024-01-13 09:48:38 2024-01-13 10:28:08 0:39:30 0:29:41 0:09:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515890 2024-01-12 21:58:04 2024-01-13 09:48:59 2024-01-13 10:25:18 0:36:19 0:26:05 0:10:14 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515891 2024-01-12 21:58:04 2024-01-13 09:48:59 2024-01-13 10:16:00 0:27:01 0:20:15 0:06:46 smithi main rhel 8.6 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 7515892 2024-01-12 21:58:05 2024-01-13 09:48:59 2024-01-13 10:14:16 0:25:17 0:14:54 0:10:23 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515893 2024-01-12 21:58:06 2024-01-13 09:49:10 2024-01-13 10:24:26 0:35:16 0:29:05 0:06:11 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7515894 2024-01-12 21:58:07 2024-01-13 09:49:40 2024-01-13 10:34:52 0:45:12 0:34:48 0:10:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
pass 7515895 2024-01-12 21:58:08 2024-01-13 09:51:21 2024-01-13 10:16:40 0:25:19 0:17:50 0:07:29 smithi main rhel 8.6 rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
fail 7515896 2024-01-12 21:58:09 2024-01-13 10:13:27 0:10:02 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi059 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b54d2d28-b1fb-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi059:/dev/nvme4n1'

pass 7515897 2024-01-12 21:58:10 2024-01-13 09:51:42 2024-01-13 10:10:59 0:19:17 0:10:06 0:09:11 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 7515898 2024-01-12 21:58:10 2024-01-13 09:51:52 2024-01-13 10:27:32 0:35:40 0:23:28 0:12:12 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
pass 7515899 2024-01-12 21:58:11 2024-01-13 09:54:13 2024-01-13 10:38:52 0:44:39 0:33:44 0:10:55 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} 2
fail 7515900 2024-01-12 21:58:12 2024-01-13 09:54:43 2024-01-13 10:27:50 0:33:07 0:20:13 0:12:54 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515901 2024-01-12 21:58:13 2024-01-13 09:55:04 2024-01-13 11:00:45 1:05:41 0:56:07 0:09:34 smithi main centos 8.stream rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} 2
pass 7515902 2024-01-12 21:58:14 2024-01-13 09:55:24 2024-01-13 10:23:50 0:28:26 0:17:39 0:10:47 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515903 2024-01-12 21:58:15 2024-01-13 10:25:36 0:22:40 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon-stretch} 1
fail 7515904 2024-01-12 21:58:16 2024-01-13 09:55:55 2024-01-13 12:37:01 2:41:06 2:29:58 0:11:08 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
Failure Reason:

Command failed on smithi173 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=16.2.14-456-g445562ab-1bionic ceph-mds=16.2.14-456-g445562ab-1bionic ceph-common=16.2.14-456-g445562ab-1bionic ceph-fuse=16.2.14-456-g445562ab-1bionic ceph-test=16.2.14-456-g445562ab-1bionic radosgw=16.2.14-456-g445562ab-1bionic python-ceph=16.2.14-456-g445562ab-1bionic libcephfs1=16.2.14-456-g445562ab-1bionic libcephfs-java=16.2.14-456-g445562ab-1bionic libcephfs-jni=16.2.14-456-g445562ab-1bionic librados2=16.2.14-456-g445562ab-1bionic librbd1=16.2.14-456-g445562ab-1bionic rbd-fuse=16.2.14-456-g445562ab-1bionic'

pass 7515905 2024-01-12 21:58:16 2024-01-13 09:57:15 2024-01-13 10:59:49 1:02:34 0:50:40 0:11:54 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
pass 7515906 2024-01-12 21:58:17 2024-01-13 09:57:36 2024-01-13 11:24:46 1:27:10 1:16:37 0:10:33 smithi main ubuntu 20.04 rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515907 2024-01-12 21:58:18 2024-01-13 09:58:36 2024-01-13 12:07:35 2:08:59 1:58:34 0:10:25 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
pass 7515908 2024-01-12 21:58:19 2024-01-13 09:58:37 2024-01-13 10:37:08 0:38:31 0:30:41 0:07:50 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/module_selftest} 2
pass 7515909 2024-01-12 21:58:20 2024-01-13 09:59:37 2024-01-13 10:23:07 0:23:30 0:14:19 0:09:11 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7515910 2024-01-12 21:58:21 2024-01-13 09:59:58 2024-01-13 10:35:39 0:35:41 0:25:35 0:10:06 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7515911 2024-01-12 21:58:22 2024-01-13 10:00:18 2024-01-13 10:40:29 0:40:11 0:28:56 0:11:15 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_api_tests} 2
pass 7515912 2024-01-12 21:58:23 2024-01-13 10:00:59 2024-01-13 10:40:34 0:39:35 0:29:48 0:09:47 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515913 2024-01-12 21:58:23 2024-01-13 10:01:39 2024-01-13 10:22:54 0:21:15 0:10:24 0:10:51 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
pass 7515914 2024-01-12 21:58:24 2024-01-13 10:03:10 2024-01-13 10:42:10 0:39:00 0:27:01 0:11:59 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7515915 2024-01-12 21:58:25 2024-01-13 10:05:00 2024-01-13 10:53:04 0:48:04 0:37:35 0:10:29 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7515916 2024-01-12 21:58:26 2024-01-13 10:05:51 2024-01-13 10:31:14 0:25:23 0:13:14 0:12:09 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/readwrite} 2
pass 7515917 2024-01-12 21:58:27 2024-01-13 10:07:11 2024-01-13 10:28:41 0:21:30 0:12:15 0:09:15 smithi main ubuntu 18.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} 2
pass 7515918 2024-01-12 21:58:28 2024-01-13 10:07:22 2024-01-13 10:46:31 0:39:09 0:26:50 0:12:19 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7515919 2024-01-12 21:58:29 2024-01-13 10:09:22 2024-01-13 10:30:20 0:20:58 0:09:49 0:11:09 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515920 2024-01-12 21:58:29 2024-01-13 10:09:43 2024-01-13 10:57:04 0:47:21 0:37:56 0:09:25 smithi main rhel 8.6 rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} 3
pass 7515921 2024-01-12 21:58:30 2024-01-13 10:09:53 2024-01-13 11:18:56 1:09:03 1:00:20 0:08:43 smithi main centos 8.stream rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 7515922 2024-01-12 21:58:31 2024-01-13 10:09:54 2024-01-13 10:29:02 0:19:08 0:08:43 0:10:25 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi080 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e7d4c0ce-b1fd-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi080:/dev/nvme4n1'

fail 7515923 2024-01-12 21:58:32 2024-01-13 10:09:54 2024-01-13 10:33:11 0:23:17 0:14:28 0:08:49 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515924 2024-01-12 21:58:33 2024-01-13 10:10:05 2024-01-13 10:30:14 0:20:09 0:13:31 0:06:38 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
pass 7515925 2024-01-12 21:58:34 2024-01-13 10:10:35 2024-01-13 11:18:21 1:07:46 1:01:24 0:06:22 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/radosbench} 2
pass 7515926 2024-01-12 21:58:35 2024-01-13 10:11:06 2024-01-13 10:43:54 0:32:48 0:22:14 0:10:34 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 7515927 2024-01-12 21:58:36 2024-01-13 10:11:46 2024-01-13 10:52:44 0:40:58 0:30:21 0:10:37 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
pass 7515928 2024-01-12 21:58:36 2024-01-13 10:12:47 2024-01-13 10:35:46 0:22:59 0:15:56 0:07:03 smithi main rhel 8.6 rados/objectstore/{backends/fusestore supported-random-distro$/{rhel_8}} 1
pass 7515929 2024-01-12 21:58:37 2024-01-13 10:12:47 2024-01-13 10:36:09 0:23:22 0:17:10 0:06:12 smithi main rhel 8.6 rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7515930 2024-01-12 21:58:38 2024-01-13 10:13:37 2024-01-13 11:00:05 0:46:28 0:33:09 0:13:19 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7515931 2024-01-12 21:58:39 2024-01-13 10:14:18 2024-01-13 10:37:17 0:22:59 0:13:32 0:09:27 smithi main centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7515932 2024-01-12 21:58:40 2024-01-13 10:14:18 2024-01-13 11:06:28 0:52:10 0:39:59 0:12:11 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7515933 2024-01-12 21:58:41 2024-01-13 10:16:49 2024-01-13 10:58:23 0:41:34 0:32:18 0:09:16 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7515934 2024-01-12 21:58:42 2024-01-13 10:18:40 2024-01-13 10:39:47 0:21:07 0:11:41 0:09:26 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7515935 2024-01-12 21:58:42 2024-01-13 10:18:40 2024-01-13 10:45:34 0:26:54 0:14:28 0:12:26 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/redirect} 2
fail 7515936 2024-01-12 21:58:43 2024-01-13 10:19:10 2024-01-13 10:49:44 0:30:34 0:19:41 0:10:53 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

fail 7515937 2024-01-12 21:58:44 2024-01-13 10:22:11 2024-01-13 10:49:35 0:27:24 0:20:43 0:06:41 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515938 2024-01-12 21:58:45 2024-01-13 10:22:32 2024-01-13 11:18:49 0:56:17 0:46:11 0:10:06 smithi main ubuntu 20.04 rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515939 2024-01-12 21:58:46 2024-01-13 10:23:02 2024-01-13 10:48:27 0:25:25 0:15:07 0:10:18 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
pass 7515940 2024-01-12 21:58:46 2024-01-13 10:23:12 2024-01-13 10:47:57 0:24:45 0:16:24 0:08:21 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7515941 2024-01-12 21:58:47 2024-01-13 10:23:13 2024-01-13 10:44:12 0:20:59 0:11:27 0:09:32 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7515942 2024-01-12 21:58:48 2024-01-13 10:23:13 2024-01-13 10:51:08 0:27:55 0:20:24 0:07:31 smithi main rhel 8.6 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 2
fail 7515943 2024-01-12 21:58:49 2024-01-13 10:23:54 2024-01-13 10:46:05 0:22:11 0:13:53 0:08:18 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 323f2a1c-b200-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi032:/dev/nvme4n1'

pass 7515944 2024-01-12 21:58:50 2024-01-13 10:24:34 2024-01-13 11:09:03 0:44:29 0:33:39 0:10:50 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
pass 7515945 2024-01-12 21:58:51 2024-01-13 10:25:25 2024-01-13 10:51:33 0:26:08 0:16:25 0:09:43 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} 2
pass 7515946 2024-01-12 21:58:52 2024-01-13 10:25:35 2024-01-13 10:54:24 0:28:49 0:17:33 0:11:16 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} 2
pass 7515947 2024-01-12 21:58:52 2024-01-13 10:25:46 2024-01-13 10:46:00 0:20:14 0:09:35 0:10:39 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} tasks/per_module_finisher_stats} 2
pass 7515948 2024-01-12 21:58:53 2024-01-13 10:26:06 2024-01-13 10:49:55 0:23:49 0:12:11 0:11:38 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515949 2024-01-12 21:58:54 2024-01-13 10:27:37 2024-01-13 11:07:32 0:39:55 0:29:30 0:10:25 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515950 2024-01-12 21:58:55 2024-01-13 10:27:57 2024-01-13 10:51:23 0:23:26 0:13:20 0:10:06 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
pass 7515951 2024-01-12 21:58:56 2024-01-13 10:28:17 2024-01-13 11:14:31 0:46:14 0:36:49 0:09:25 smithi main ubuntu 20.04 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515952 2024-01-12 21:58:57 2024-01-13 10:28:18 2024-01-13 11:35:39 1:07:21 0:58:22 0:08:59 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7515953 2024-01-12 21:58:58 2024-01-13 10:28:18 2024-01-13 10:53:44 0:25:26 0:17:53 0:07:33 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515954 2024-01-12 21:58:58 2024-01-13 10:28:49 2024-01-13 10:45:55 0:17:06 0:07:41 0:09:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7515955 2024-01-12 21:58:59 2024-01-13 10:29:09 2024-01-13 11:01:14 0:32:05 0:22:05 0:10:00 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7515956 2024-01-12 21:59:00 2024-01-13 10:30:09 2024-01-13 10:49:01 0:18:52 0:07:56 0:10:56 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515957 2024-01-12 21:59:01 2024-01-13 10:30:20 2024-01-13 11:16:24 0:46:04 0:35:34 0:10:30 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 7515958 2024-01-12 21:59:02 2024-01-13 10:31:20 2024-01-13 11:27:07 0:55:47 0:45:17 0:10:30 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/mon} 1
pass 7515959 2024-01-12 21:59:03 2024-01-13 10:31:21 2024-01-13 11:13:21 0:42:00 0:29:53 0:12:07 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7515960 2024-01-12 21:59:03 2024-01-13 10:58:20 0:13:41 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7515961 2024-01-12 21:59:04 2024-01-13 10:35:02 2024-01-13 10:57:06 0:22:04 0:12:32 0:09:32 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} 2
dead 7515962 2024-01-12 21:59:05 2024-01-13 10:35:02 2024-01-13 22:44:09 12:09:07 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

hit max job timeout

pass 7515963 2024-01-12 21:59:06 2024-01-13 10:35:43 2024-01-13 10:54:55 0:19:12 0:10:39 0:08:33 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 7515964 2024-01-12 21:59:07 2024-01-13 10:35:53 2024-01-13 10:56:26 0:20:33 0:11:40 0:08:53 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi059 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b84e4fba-b201-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi059:/dev/nvme4n1'

fail 7515965 2024-01-12 21:59:08 2024-01-13 10:37:14 2024-01-13 11:05:20 0:28:06 0:18:31 0:09:35 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515966 2024-01-12 21:59:09 2024-01-13 10:37:24 2024-01-13 10:57:02 0:19:38 0:08:17 0:11:21 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515967 2024-01-12 21:59:09 2024-01-13 10:38:55 2024-01-13 11:13:30 0:34:35 0:22:10 0:12:25 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects} 2
pass 7515968 2024-01-12 21:59:10 2024-01-13 10:39:55 2024-01-13 11:34:00 0:54:05 0:45:25 0:08:40 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
pass 7515969 2024-01-12 21:59:11 2024-01-13 10:40:36 2024-01-13 11:06:15 0:25:39 0:13:01 0:12:38 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 7515970 2024-01-12 21:59:12 2024-01-13 10:40:36 2024-01-13 11:03:35 0:22:59 0:14:27 0:08:32 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
pass 7515971 2024-01-12 21:59:13 2024-01-13 10:40:36 2024-01-13 11:03:33 0:22:57 0:14:18 0:08:39 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
pass 7515972 2024-01-12 21:59:14 2024-01-13 10:40:37 2024-01-13 11:09:26 0:28:49 0:21:24 0:07:25 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/set-chunks-read} 2
pass 7515973 2024-01-12 21:59:14 2024-01-13 10:42:18 2024-01-13 11:19:50 0:37:32 0:28:15 0:09:17 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7515974 2024-01-12 21:59:15 2024-01-13 10:42:48 2024-01-13 11:11:54 0:29:06 0:22:05 0:07:01 smithi main rhel 8.6 rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} 1
pass 7515975 2024-01-12 21:59:16 2024-01-13 10:43:18 2024-01-13 11:18:28 0:35:10 0:26:55 0:08:15 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
fail 7515976 2024-01-12 21:59:17 2024-01-13 10:43:19 2024-01-13 11:31:37 0:48:18 0:36:54 0:11:24 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi121 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b5155c52-b202-11ee-95ac-87774f69a715 -e sha1=445562ab4bc3ddfb386936119050695810860bcb -- bash -c \'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk \'"\'"\'{print $2}\'"\'"\')\''

pass 7515977 2024-01-12 21:59:18 2024-01-13 10:43:19 2024-01-13 11:16:20 0:33:01 0:24:57 0:08:04 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7515978 2024-01-12 21:59:19 2024-01-13 10:43:20 2024-01-13 11:14:17 0:30:57 0:19:50 0:11:07 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7515979 2024-01-12 21:59:20 2024-01-13 10:44:00 2024-01-13 11:07:04 0:23:04 0:15:21 0:07:43 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/scrub_test} 2
pass 7515980 2024-01-12 21:59:20 2024-01-13 10:45:41 2024-01-13 11:06:44 0:21:03 0:10:58 0:10:05 smithi main centos 8.stream rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7515981 2024-01-12 21:59:21 2024-01-13 10:45:41 2024-01-13 11:20:46 0:35:05 0:25:55 0:09:10 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
fail 7515982 2024-01-12 21:59:22 2024-01-13 10:46:01 2024-01-13 11:03:35 0:17:34 0:09:00 0:08:34 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi080 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid da4ade3e-b202-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi080:/dev/nvme4n1'

pass 7515983 2024-01-12 21:59:23 2024-01-13 10:46:02 2024-01-13 11:10:39 0:24:37 0:15:30 0:09:07 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
pass 7515984 2024-01-12 21:59:24 2024-01-13 10:46:12 2024-01-13 11:19:36 0:33:24 0:27:31 0:05:53 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/small-objects-balanced} 2
pass 7515985 2024-01-12 21:59:25 2024-01-13 10:46:33 2024-01-13 11:07:59 0:21:26 0:08:45 0:12:41 smithi main ubuntu 20.04 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
pass 7515986 2024-01-12 21:59:25 2024-01-13 10:48:03 2024-01-13 12:01:46 1:13:43 1:02:40 0:11:03 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} 3
fail 7515987 2024-01-12 21:59:26 2024-01-13 10:49:44 2024-01-13 11:17:19 0:27:35 0:17:39 0:09:56 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-a139bc18-b203-11ee-95ac-87774f69a715@mon.smithi178'

pass 7515988 2024-01-12 21:59:27 2024-01-13 10:49:54 2024-01-13 11:18:06 0:28:12 0:18:49 0:09:23 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/progress} 2
pass 7515989 2024-01-12 21:59:28 2024-01-13 10:50:05 2024-01-13 11:13:25 0:23:20 0:12:30 0:10:50 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7515990 2024-01-12 21:59:29 2024-01-13 10:51:15 2024-01-13 11:34:29 0:43:14 0:34:02 0:09:12 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
fail 7515991 2024-01-12 21:59:30 2024-01-13 10:51:16 2024-01-13 11:14:33 0:23:17 0:14:21 0:08:56 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

fail 7515992 2024-01-12 21:59:31 2024-01-13 10:51:26 2024-01-13 17:37:09 6:45:43 6:36:18 0:09:25 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7515993 2024-01-12 21:59:31 2024-01-13 10:51:36 2024-01-13 11:12:47 0:21:11 0:10:10 0:11:01 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7515994 2024-01-12 21:59:32 2024-01-13 10:52:47 2024-01-13 11:37:15 0:44:28 0:34:09 0:10:19 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
fail 7515995 2024-01-12 21:59:33 2024-01-13 10:53:07 2024-01-13 11:33:31 0:40:24 0:29:23 0:11:01 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/caps.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

pass 7515996 2024-01-12 21:59:34 2024-01-13 10:53:48 2024-01-13 11:16:32 0:22:44 0:13:05 0:09:39 smithi main centos 8.stream rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 7515997 2024-01-12 21:59:35 2024-01-13 10:53:48 2024-01-13 11:40:16 0:46:28 0:34:42 0:11:46 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7515998 2024-01-12 21:59:36 2024-01-13 10:54:29 2024-01-13 11:29:30 0:35:01 0:23:24 0:11:37 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized} 2
pass 7515999 2024-01-12 21:59:37 2024-01-13 10:56:29 2024-01-13 11:19:36 0:23:07 0:14:36 0:08:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
fail 7516000 2024-01-12 21:59:37 2024-01-13 10:56:30 2024-01-13 11:22:24 0:25:54 0:17:46 0:08:08 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516001 2024-01-12 21:59:38 2024-01-13 10:57:10 2024-01-13 11:19:19 0:22:09 0:11:00 0:11:09 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
pass 7516002 2024-01-12 21:59:39 2024-01-13 10:57:11 2024-01-13 11:35:21 0:38:10 0:27:12 0:10:58 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7516003 2024-01-12 21:59:40 2024-01-13 10:57:11 2024-01-13 11:29:43 0:32:32 0:21:10 0:11:22 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi153 with status 5: 'sudo systemctl stop ceph-58fc7394-b205-11ee-95ac-87774f69a715@mon.smithi153'

pass 7516004 2024-01-12 21:59:41 2024-01-13 10:58:22 2024-01-13 11:21:27 0:23:05 0:16:56 0:06:09 smithi main rhel 8.6 rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7516005 2024-01-12 21:59:42 2024-01-13 10:58:22 2024-01-13 11:18:30 0:20:08 0:10:27 0:09:41 smithi main ubuntu 20.04 rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
fail 7516006 2024-01-12 21:59:42 2024-01-13 10:58:32 2024-01-13 11:21:05 0:22:33 0:09:49 0:12:44 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37e60242-b205-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi039:/dev/nvme4n1'

fail 7516007 2024-01-12 21:59:43 2024-01-13 10:59:53 2024-01-13 11:27:26 0:27:33 0:19:33 0:08:00 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516008 2024-01-12 21:59:44 2024-01-13 10:59:53 2024-01-13 11:34:08 0:34:15 0:22:00 0:12:15 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects} 2
pass 7516009 2024-01-12 21:59:45 2024-01-13 11:00:14 2024-01-13 11:36:54 0:36:40 0:24:20 0:12:20 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
pass 7516010 2024-01-12 21:59:46 2024-01-13 11:03:34 2024-01-13 11:44:59 0:41:25 0:34:59 0:06:26 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 7516011 2024-01-12 21:59:47 2024-01-13 11:03:45 2024-01-13 14:38:00 3:34:15 3:24:55 0:09:20 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} 1
pass 7516012 2024-01-12 21:59:48 2024-01-13 11:03:45 2024-01-13 11:28:59 0:25:14 0:17:17 0:07:57 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/libcephsqlite} 2
pass 7516013 2024-01-12 21:59:48 2024-01-13 11:05:26 2024-01-13 11:55:33 0:50:07 0:36:44 0:13:23 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
pass 7516014 2024-01-12 21:59:49 2024-01-13 11:06:36 2024-01-13 11:43:25 0:36:49 0:26:07 0:10:42 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
fail 7516015 2024-01-12 21:59:50 2024-01-13 11:07:07 2024-01-13 11:31:23 0:24:16 0:17:21 0:06:55 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516016 2024-01-12 21:59:51 2024-01-13 11:07:37 2024-01-13 11:34:25 0:26:48 0:16:49 0:09:59 smithi main centos 8.stream rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 7516017 2024-01-12 21:59:52 2024-01-13 11:07:38 2024-01-13 11:43:46 0:36:08 0:26:00 0:10:08 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7516018 2024-01-12 21:59:53 2024-01-13 11:08:08 2024-01-13 11:29:11 0:21:03 0:12:13 0:08:50 smithi main centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7516019 2024-01-12 21:59:54 2024-01-13 11:08:08 2024-01-13 11:50:26 0:42:18 0:35:01 0:07:17 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7516020 2024-01-12 21:59:54 2024-01-13 11:09:09 2024-01-13 11:40:38 0:31:29 0:21:04 0:10:25 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi086 with status 5: 'sudo systemctl stop ceph-c3c8e7f6-b206-11ee-95ac-87774f69a715@mon.smithi086'

pass 7516021 2024-01-12 21:59:55 2024-01-13 11:09:29 2024-01-13 11:47:53 0:38:24 0:27:19 0:11:05 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} 2
fail 7516022 2024-01-12 21:59:56 2024-01-13 11:10:40 2024-01-13 11:31:52 0:21:12 0:09:00 0:12:12 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi026 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b3ef6cb0-b206-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi026:/dev/nvme4n1'

pass 7516023 2024-01-12 21:59:57 2024-01-13 11:12:51 2024-01-13 11:56:26 0:43:35 0:35:51 0:07:44 smithi main rhel 8.6 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{rhel_8}} 1
pass 7516024 2024-01-12 21:59:58 2024-01-13 11:13:31 2024-01-13 11:35:21 0:21:50 0:14:16 0:07:34 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
pass 7516025 2024-01-12 21:59:59 2024-01-13 11:13:31 2024-01-13 11:33:27 0:19:56 0:09:51 0:10:05 smithi main ubuntu 20.04 rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 7516026 2024-01-12 21:59:59 2024-01-13 11:13:32 2024-01-13 11:34:48 0:21:16 0:10:52 0:10:24 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 3
pass 7516027 2024-01-12 22:00:00 2024-01-13 11:13:32 2024-01-13 12:10:31 0:56:59 0:45:39 0:11:20 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7516028 2024-01-12 22:00:01 2024-01-13 11:13:32 2024-01-13 12:19:36 1:06:04 0:55:02 0:11:02 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7516029 2024-01-12 22:00:02 2024-01-13 11:14:23 2024-01-13 11:36:37 0:22:14 0:12:27 0:09:47 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7516030 2024-01-12 22:00:03 2024-01-13 11:14:33 2024-01-13 11:39:37 0:25:04 0:15:41 0:09:23 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7516031 2024-01-12 22:00:04 2024-01-13 11:14:44 2024-01-13 11:38:33 0:23:49 0:11:24 0:12:25 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
pass 7516032 2024-01-12 22:00:05 2024-01-13 11:16:24 2024-01-13 11:41:57 0:25:33 0:14:25 0:11:08 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7516033 2024-01-12 22:00:05 2024-01-13 11:16:35 2024-01-13 11:54:51 0:38:16 0:28:24 0:09:52 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7516034 2024-01-12 22:00:06 2024-01-13 11:17:25 2024-01-13 11:59:35 0:42:10 0:34:17 0:07:53 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
fail 7516035 2024-01-12 22:00:07 2024-01-13 11:18:16 2024-01-13 11:47:36 0:29:20 0:18:54 0:10:26 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2024-01-13T11:43:13.885853+0000 mon.a (mon.0) 506 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7516036 2024-01-12 22:00:08 2024-01-13 11:18:27 2024-01-13 11:39:43 0:21:16 0:12:08 0:09:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
pass 7516037 2024-01-12 22:00:09 2024-01-13 11:54:59 0:26:14 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/snaps-few-objects} 2
fail 7516038 2024-01-12 22:00:10 2024-01-13 11:18:58 2024-01-13 11:49:03 0:30:05 0:18:40 0:11:25 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516039 2024-01-12 22:00:10 2024-01-13 11:19:28 2024-01-13 12:04:44 0:45:16 0:36:13 0:09:03 smithi main centos 8.stream rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
fail 7516040 2024-01-12 22:00:11 2024-01-13 11:19:38 2024-01-13 11:47:02 0:27:24 0:18:47 0:08:37 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516041 2024-01-12 22:00:12 2024-01-13 11:19:39 2024-01-13 12:22:14 1:02:35 0:52:25 0:10:10 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
pass 7516042 2024-01-12 22:00:13 2024-01-13 11:20:49 2024-01-13 11:48:58 0:28:09 0:17:56 0:10:13 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7516043 2024-01-12 22:00:14 2024-01-13 11:21:00 2024-01-13 12:00:34 0:39:34 0:32:01 0:07:33 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/snaps-few-objects} 2
pass 7516044 2024-01-12 22:00:15 2024-01-13 11:21:00 2024-01-13 11:45:19 0:24:19 0:13:25 0:10:54 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7516045 2024-01-12 22:00:15 2024-01-13 11:21:30 2024-01-13 11:49:25 0:27:55 0:17:06 0:10:49 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=445562ab4bc3ddfb386936119050695810860bcb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7516046 2024-01-12 22:00:16 2024-01-13 11:22:31 2024-01-13 12:02:00 0:39:29 0:27:26 0:12:03 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
fail 7516047 2024-01-12 22:00:17 2024-01-13 11:24:52 2024-01-13 11:46:50 0:21:58 0:12:36 0:09:22 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:445562ab4bc3ddfb386936119050695810860bcb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ce7f62b8-b208-11ee-95ac-87774f69a715 -- ceph orch daemon add osd smithi084:/dev/nvme4n1'

pass 7516048 2024-01-12 22:00:18 2024-01-13 11:27:32 2024-01-13 12:21:18 0:53:46 0:43:35 0:10:11 smithi main centos 8.stream rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 7516049 2024-01-12 22:00:19 2024-01-13 11:27:33 2024-01-13 12:01:45 0:34:12 0:20:27 0:13:45 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

fail 7516050 2024-01-12 22:00:20 2024-01-13 11:29:03 2024-01-13 12:00:43 0:31:40 0:21:24 0:10:16 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi123 with status 5: 'sudo systemctl stop ceph-a3276948-b209-11ee-95ac-87774f69a715@mon.smithi123'

pass 7516051 2024-01-12 22:00:21 2024-01-13 11:29:34 2024-01-13 12:07:04 0:37:30 0:27:01 0:10:29 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7516052 2024-01-12 22:00:21 2024-01-13 11:29:54 2024-01-13 11:46:53 0:16:59 0:07:37 0:09:22 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 7516053 2024-01-12 22:00:22 2024-01-13 11:29:55 2024-01-13 12:02:59 0:33:04 0:21:33 0:11:31 smithi main centos 8.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_8}} 2
pass 7516054 2024-01-12 22:00:23 2024-01-13 11:31:25 2024-01-13 11:59:00 0:27:35 0:16:20 0:11:15 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/write_fadvise_dontneed} 2
fail 7516055 2024-01-12 22:00:24 2024-01-13 11:31:46 2024-01-13 12:09:05 0:37:19 0:25:21 0:11:58 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

No module named 'tasks.thrashosds'

fail 7516056 2024-01-12 22:00:25 2024-01-13 11:33:36 2024-01-13 13:18:43 1:45:07 1:35:38 0:09:29 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi133 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a23abc2-b209-11ee-95ac-87774f69a715 -e sha1=445562ab4bc3ddfb386936119050695810860bcb -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7516057 2024-01-12 22:00:26 2024-01-13 11:33:37 2024-01-13 12:01:00 0:27:23 0:18:30 0:08:53 smithi main centos 8.stream rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 7516058 2024-01-12 22:00:26 2024-01-13 11:34:07 2024-01-13 11:55:56 0:21:49 0:12:13 0:09:36 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
pass 7516059 2024-01-12 22:00:27 2024-01-13 11:34:07 2024-01-13 12:12:11 0:38:04 0:26:00 0:12:04 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7516060 2024-01-12 22:00:28 2024-01-13 11:34:18 2024-01-13 11:59:24 0:25:06 0:14:56 0:10:10 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (121) after waiting for 120 seconds

pass 7516061 2024-01-12 22:00:29 2024-01-13 11:34:28 2024-01-13 12:14:17 0:39:49 0:30:13 0:09:36 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 7516062 2024-01-12 22:00:30 2024-01-13 11:34:59 2024-01-13 13:30:18 1:55:19 1:48:06 0:07:13 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
fail 7516063 2024-01-12 22:00:31 2024-01-13 11:34:59 2024-01-13 12:05:26 0:30:27 0:22:30 0:07:57 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

No module named 'tasks.workunit'