Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7525000 2024-01-21 16:36:30 2024-01-21 18:42:23 2024-01-22 00:49:59 6:07:36 5:12:08 0:55:28 smithi main centos 8.stream rados/objectstore/{backends/objectstore supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi204 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

pass 7525001 2024-01-21 16:36:31 2024-01-21 18:44:04 2024-01-21 19:20:33 0:36:29 0:26:21 0:10:08 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7525002 2024-01-21 16:36:32 2024-01-21 18:44:44 2024-01-21 19:30:43 0:45:59 0:35:33 0:10:26 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7525003 2024-01-21 16:36:33 2024-01-21 18:45:05 2024-01-21 19:22:55 0:37:50 0:27:09 0:10:41 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7525004 2024-01-21 16:36:33 2024-01-21 18:45:05 2024-01-21 19:14:04 0:28:59 0:20:34 0:08:25 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
Failure Reason:

"2024-01-21T19:10:30.665452+0000 mon.a (mon.0) 478 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7525005 2024-01-21 16:36:34 2024-01-21 18:46:36 2024-01-21 19:18:11 0:31:35 0:21:11 0:10:24 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7525006 2024-01-21 16:36:35 2024-01-21 18:46:56 2024-01-21 19:24:54 0:37:58 0:28:30 0:09:28 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7525007 2024-01-21 16:36:36 2024-01-21 18:47:47 2024-01-21 19:27:04 0:39:17 0:27:18 0:11:59 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7525008 2024-01-21 16:36:37 2024-01-21 18:49:57 2024-01-21 19:11:46 0:21:49 0:11:32 0:10:17 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7525009 2024-01-21 16:36:37 2024-01-21 18:50:38 2024-01-21 19:19:54 0:29:16 0:18:55 0:10:21 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi107 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0e2fe10c-b890-11ee-95b0-87774f69a715 -e sha1=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7525010 2024-01-21 16:36:38 2024-01-21 18:50:38 2024-01-21 19:17:27 0:26:49 0:15:04 0:11:45 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7525011 2024-01-21 16:36:39 2024-01-21 18:51:49 2024-01-21 19:29:17 0:37:28 0:27:19 0:10:09 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7525012 2024-01-21 16:36:40 2024-01-21 18:51:59 2024-01-21 19:16:37 0:24:38 0:17:11 0:07:27 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7525013 2024-01-21 16:36:41 2024-01-21 18:52:50 2024-01-21 19:27:54 0:35:04 0:26:56 0:08:08 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7525014 2024-01-21 16:36:42 2024-01-21 18:52:50 2024-01-21 19:57:18 1:04:28 0:52:18 0:12:10 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7525015 2024-01-21 16:36:42 2024-01-21 18:53:40 2024-01-21 19:28:56 0:35:16 0:23:13 0:12:03 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7525016 2024-01-21 16:36:43 2024-01-21 18:54:41 2024-01-21 19:29:58 0:35:17 0:24:51 0:10:26 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7525017 2024-01-21 16:36:44 2024-01-21 18:54:51 2024-01-21 19:22:12 0:27:21 0:17:40 0:09:41 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2024-01-21T19:19:01.780649+0000 mon.a (mon.0) 471 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7525018 2024-01-21 16:36:45 2024-01-21 18:55:12 2024-01-21 19:34:08 0:38:56 0:27:13 0:11:43 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7525019 2024-01-21 16:36:46 2024-01-21 18:56:52 2024-01-21 19:18:11 0:21:19 0:11:59 0:09:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5abc6314-b891-11ee-95b0-87774f69a715 -- ceph orch daemon add osd smithi150:vg_nvme/lv_4'

pass 7525020 2024-01-21 16:36:46 2024-01-21 18:56:53 2024-01-21 19:36:55 0:40:02 0:30:38 0:09:24 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7525021 2024-01-21 16:36:47 2024-01-21 18:57:03 2024-01-21 19:42:15 0:45:12 0:33:17 0:11:55 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
fail 7525022 2024-01-21 16:36:48 2024-01-21 18:58:44 2024-01-21 19:26:59 0:28:15 0:15:43 0:12:32 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi043 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7308bef4-b891-11ee-95b0-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7525023 2024-01-21 16:36:49 2024-01-21 19:01:35 2024-01-21 19:26:50 0:25:15 0:15:35 0:09:40 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7525024 2024-01-21 16:36:50 2024-01-21 19:01:45 2024-01-21 19:39:18 0:37:33 0:28:47 0:08:46 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7525025 2024-01-21 16:36:51 2024-01-21 19:02:05 2024-01-21 19:29:23 0:27:18 0:17:43 0:09:35 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi123 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e809a3a-b891-11ee-95b0-87774f69a715 -e sha1=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7525026 2024-01-21 16:36:51 2024-01-21 19:02:26 2024-01-21 19:26:26 0:24:00 0:13:42 0:10:18 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7525027 2024-01-21 16:36:52 2024-01-21 19:03:16 2024-01-21 19:54:29 0:51:13 0:44:04 0:07:09 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7525028 2024-01-21 16:36:53 2024-01-21 19:03:27 2024-01-21 19:40:50 0:37:23 0:27:03 0:10:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7525029 2024-01-21 16:36:54 2024-01-21 19:03:27 2024-01-21 19:48:02 0:44:35 0:35:45 0:08:50 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7525030 2024-01-21 16:36:55 2024-01-21 19:04:18 2024-01-21 19:55:06 0:50:48 0:40:55 0:09:53 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7525031 2024-01-21 16:36:56 2024-01-21 19:05:28 2024-01-21 19:44:36 0:39:08 0:29:41 0:09:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

timeout expired in wait_until_healthy

pass 7525032 2024-01-21 16:36:56 2024-01-21 19:05:29 2024-01-21 19:37:24 0:31:55 0:22:10 0:09:45 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
fail 7525033 2024-01-21 16:36:57 2024-01-21 19:06:29 2024-01-21 19:36:25 0:29:56 0:20:12 0:09:44 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-b4b345a8-b892-11ee-95b0-87774f69a715@mon.smithi195'

fail 7525034 2024-01-21 16:36:58 2024-01-21 19:07:20 2024-01-21 19:38:46 0:31:26 0:17:08 0:14:18 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-da661c3a-b892-11ee-95b0-87774f69a715@mon.smithi105'

fail 7525035 2024-01-21 16:36:59 2024-01-21 19:11:51 2024-01-21 19:39:40 0:27:49 0:20:00 0:07:49 smithi main rhel 8.6 rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7525036 2024-01-21 16:37:00 2024-01-21 19:12:11 2024-01-21 19:42:37 0:30:26 0:19:33 0:10:53 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

timeout expired in wait_until_healthy

fail 7525037 2024-01-21 16:37:00 2024-01-21 19:12:11 2024-01-21 20:00:25 0:48:14 0:41:35 0:06:39 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7525038 2024-01-21 16:37:01 2024-01-21 19:12:12 2024-01-21 19:38:27 0:26:15 0:16:04 0:10:11 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7525039 2024-01-21 16:37:02 2024-01-21 19:13:12 2024-01-21 19:43:24 0:30:12 0:20:42 0:09:30 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi167 with status 5: 'sudo systemctl stop ceph-ad6fc428-b893-11ee-95b0-87774f69a715@mon.smithi167'

pass 7525040 2024-01-21 16:37:03 2024-01-21 19:14:13 2024-01-21 20:07:50 0:53:37 0:43:53 0:09:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
fail 7525041 2024-01-21 16:37:04 2024-01-21 19:14:23 2024-01-21 19:48:24 0:34:01 0:20:54 0:13:07 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi029 with status 5: 'sudo systemctl stop ceph-2d7ab6b4-b894-11ee-95b0-87774f69a715@mon.smithi029'

pass 7525042 2024-01-21 16:37:05 2024-01-21 19:16:44 2024-01-21 20:30:32 1:13:48 1:03:24 0:10:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} 3
fail 7525043 2024-01-21 16:37:05 2024-01-21 19:16:54 2024-01-21 19:44:58 0:28:04 0:17:26 0:10:38 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi131 with status 5: 'sudo systemctl stop ceph-e065a956-b893-11ee-95b0-87774f69a715@mon.smithi131'

fail 7525044 2024-01-21 16:37:06 2024-01-21 19:16:55 2024-01-21 19:47:24 0:30:29 0:20:12 0:10:17 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi115 with status 5: 'sudo systemctl stop ceph-3c4219b2-b894-11ee-95b0-87774f69a715@mon.smithi115'

pass 7525045 2024-01-21 16:37:07 2024-01-21 19:17:35 2024-01-21 19:41:22 0:23:47 0:13:50 0:09:57 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/libcephsqlite} 2
fail 7525046 2024-01-21 16:37:08 2024-01-21 19:18:16 2024-01-21 19:59:25 0:41:09 0:34:35 0:06:34 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7525047 2024-01-21 16:37:09 2024-01-21 19:18:16 2024-01-21 19:49:03 0:30:47 0:20:15 0:10:32 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi148 with status 5: 'sudo systemctl stop ceph-7072b886-b894-11ee-95b0-87774f69a715@mon.smithi148'

pass 7525048 2024-01-21 16:37:10 2024-01-21 19:19:57 2024-01-21 19:48:25 0:28:28 0:17:24 0:11:04 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
fail 7525049 2024-01-21 16:37:11 2024-01-21 19:20:37 2024-01-21 20:13:10 0:52:33 0:42:20 0:10:13 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7525050 2024-01-21 16:37:11 2024-01-21 19:21:48 2024-01-21 20:25:07 1:03:19 0:51:22 0:11:57 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
fail 7525051 2024-01-21 16:37:12 2024-01-21 19:22:58 2024-01-21 19:49:49 0:26:51 0:16:36 0:10:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7525052 2024-01-21 16:37:13 2024-01-21 19:22:59 2024-01-21 20:16:50 0:53:51 0:41:44 0:12:07 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7525053 2024-01-21 16:37:14 2024-01-21 19:24:59 2024-01-21 19:55:45 0:30:46 0:20:11 0:10:35 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi156 with status 5: 'sudo systemctl stop ceph-5d2872b0-b895-11ee-95b0-87774f69a715@mon.smithi156'

fail 7525054 2024-01-21 16:37:15 2024-01-21 19:26:30 2024-01-21 20:52:12 1:25:42 1:16:19 0:09:23 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 116a9fec-b895-11ee-95b0-87774f69a715 -e sha1=ea310589757b987e8ba9e9ba96dfa9a6f9c1e8ec -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7525055 2024-01-21 16:37:16 2024-01-21 19:27:00 2024-01-21 20:10:10 0:43:10 0:33:14 0:09:56 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds