Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7535084 2024-01-26 16:12:25 2024-01-27 22:34:47 2024-01-27 23:18:54 0:44:07 0:31:45 0:12:22 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7535085 2024-01-26 16:12:26 2024-01-27 22:36:57 2024-01-27 23:19:43 0:42:46 0:31:51 0:10:55 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7535086 2024-01-26 16:12:27 2024-01-27 22:38:28 2024-01-27 22:58:59 0:20:31 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
Failure Reason:

Failed to reconnect to smithi071

pass 7535087 2024-01-26 16:12:28 2024-01-27 22:39:08 2024-01-27 23:09:59 0:30:51 0:21:57 0:08:54 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7535088 2024-01-26 16:12:28 2024-01-27 22:39:09 2024-01-27 23:15:16 0:36:07 0:25:32 0:10:35 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535089 2024-01-26 16:12:29 2024-01-27 22:40:09 2024-01-27 23:20:13 0:40:04 0:28:47 0:11:17 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7535090 2024-01-26 16:12:30 2024-01-27 22:40:30 2024-01-27 23:06:27 0:25:57 0:18:59 0:06:58 smithi main rhel 8.6 rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7535091 2024-01-26 16:12:31 2024-01-27 22:41:10 2024-01-27 23:06:38 0:25:28 0:16:44 0:08:44 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi161 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4b637562-bd67-11ee-95b2-87774f69a715 -e sha1=a72a5dace2353a54147d37bd2699a2a482507966 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7535092 2024-01-26 16:12:32 2024-01-27 22:41:21 2024-01-27 23:02:05 0:20:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Failed to reconnect to smithi002

pass 7535093 2024-01-26 16:12:33 2024-01-27 22:42:01 2024-01-27 23:06:55 0:24:54 0:15:56 0:08:58 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7535094 2024-01-26 16:12:33 2024-01-27 22:42:01 2024-01-27 23:17:13 0:35:12 0:26:03 0:09:09 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535095 2024-01-26 16:12:34 2024-01-27 22:42:02 2024-01-27 23:33:53 0:51:51 0:29:29 0:22:22 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
fail 7535096 2024-01-26 16:12:35 2024-01-27 22:43:42 2024-01-27 23:05:28 0:21:46 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi110

fail 7535097 2024-01-26 16:12:36 2024-01-27 22:44:33 2024-01-27 23:06:19 0:21:46 0:12:01 0:09:45 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi194 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3a944602-bd68-11ee-95b2-87774f69a715 -- ceph mon dump -f json'

fail 7535098 2024-01-26 16:12:37 2024-01-27 22:45:13 2024-01-27 23:34:51 0:49:38 0:36:59 0:12:39 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi043 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 09af4c44-bd68-11ee-95b2-87774f69a715 -e sha1=a72a5dace2353a54147d37bd2699a2a482507966 -- bash -c \'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk \'"\'"\'{print $2}\'"\'"\')\''

pass 7535099 2024-01-26 16:12:38 2024-01-27 22:47:04 2024-01-27 23:18:43 0:31:39 0:21:36 0:10:03 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7535100 2024-01-26 16:12:39 2024-01-27 22:47:04 2024-01-27 23:33:25 0:46:21 0:35:48 0:10:33 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7535101 2024-01-26 16:12:39 2024-01-27 22:47:35 2024-01-27 23:22:39 0:35:04 0:25:53 0:09:11 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7535102 2024-01-26 16:12:40 2024-01-27 22:47:45 2024-01-27 23:16:37 0:28:52 0:17:43 0:11:09 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 7535103 2024-01-26 16:12:41 2024-01-27 22:49:26 2024-01-27 23:26:15 0:36:49 0:27:00 0:09:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535104 2024-01-26 16:12:42 2024-01-27 22:49:26 2024-01-27 23:21:17 0:31:51 0:21:42 0:10:09 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
pass 7535105 2024-01-26 16:12:43 2024-01-27 22:49:57 2024-01-27 23:29:04 0:39:07 0:26:11 0:12:56 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535106 2024-01-26 16:12:43 2024-01-27 22:54:08 2024-01-27 23:38:57 0:44:49 0:31:35 0:13:14 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7535107 2024-01-26 16:12:44 2024-01-27 22:56:38 2024-01-27 23:41:52 0:45:14 0:33:27 0:11:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
fail 7535108 2024-01-26 16:12:45 2024-01-27 22:58:29 2024-01-27 23:24:00 0:25:31 0:15:39 0:09:52 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi064 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a72a5dace2353a54147d37bd2699a2a482507966 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7fbccca8-bd69-11ee-95b2-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7535109 2024-01-26 16:12:46 2024-01-27 22:58:39 2024-01-27 23:24:12 0:25:33 0:15:22 0:10:11 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7535110 2024-01-26 16:12:47 2024-01-27 22:58:40 2024-01-27 23:35:57 0:37:17 0:27:03 0:10:14 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7535111 2024-01-26 16:12:48 2024-01-27 22:59:00 2024-01-27 23:25:49 0:26:49 0:17:43 0:09:06 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a285dbe-bd69-11ee-95b2-87774f69a715 -e sha1=a72a5dace2353a54147d37bd2699a2a482507966 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7535112 2024-01-26 16:12:48 2024-01-27 22:59:01 2024-01-27 23:29:26 0:30:25 0:20:58 0:09:27 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7535113 2024-01-26 16:12:49 2024-01-27 22:59:11 2024-01-27 23:39:41 0:40:30 0:26:35 0:13:55 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535114 2024-01-26 16:12:50 2024-01-27 23:02:02 2024-01-27 23:35:28 0:33:26 0:22:38 0:10:48 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
pass 7535115 2024-01-26 16:12:51 2024-01-27 23:02:12 2024-01-27 23:39:46 0:37:34 0:27:03 0:10:31 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7535116 2024-01-26 16:12:52 2024-01-27 23:03:03 2024-01-27 23:41:59 0:38:56 0:29:05 0:09:51 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7535117 2024-01-26 16:12:53 2024-01-27 23:03:03 2024-01-27 23:25:07 0:22:04 0:11:34 0:10:30 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7535118 2024-01-26 16:12:54 2024-01-27 23:03:43 2024-01-27 23:36:35 0:32:52 0:19:50 0:13:02 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

timeout expired in wait_until_healthy

fail 7535119 2024-01-26 16:12:54 2024-01-27 23:05:34 2024-01-27 23:52:34 0:47:00 0:41:19 0:05:41 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7535120 2024-01-26 16:12:55 2024-01-27 23:05:35 2024-01-27 23:33:40 0:28:05 0:17:14 0:10:51 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=585bdf827237450fb80df4a856eba434a64948fa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7535121 2024-01-26 16:12:56 2024-01-27 23:05:35 2024-01-27 23:44:53 0:39:18 0:30:26 0:08:52 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

timeout expired in wait_until_healthy

fail 7535122 2024-01-26 16:12:57 2024-01-27 23:05:45 2024-01-27 23:37:20 0:31:35 0:20:31 0:11:04 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi194 with status 5: 'sudo systemctl stop ceph-2005e414-bd6b-11ee-95b2-87774f69a715@mon.smithi194'

pass 7535123 2024-01-26 16:12:58 2024-01-27 23:06:26 2024-01-27 23:38:09 0:31:43 0:21:16 0:10:27 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7535124 2024-01-26 16:12:59 2024-01-27 23:06:36 2024-01-27 23:41:51 0:35:15 0:25:17 0:09:58 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7535125 2024-01-26 16:12:59 2024-01-27 23:06:47 2024-01-27 23:40:21 0:33:34 0:20:41 0:12:53 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi153 with status 5: 'sudo systemctl stop ceph-a0ae3044-bd6b-11ee-95b2-87774f69a715@mon.smithi153'

pass 7535126 2024-01-26 16:13:00 2024-01-27 23:06:57 2024-01-27 23:54:35 0:47:38 0:36:03 0:11:35 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
fail 7535127 2024-01-26 16:13:01 2024-01-27 23:08:18 2024-01-27 23:52:03 0:43:45 0:36:15 0:07:30 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7535128 2024-01-26 16:13:02 2024-01-27 23:08:38 2024-01-27 23:41:02 0:32:24 0:22:19 0:10:05 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
fail 7535129 2024-01-26 16:13:03 2024-01-27 23:10:09 2024-01-27 23:42:17 0:32:08 0:20:44 0:11:24 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi118 with status 5: 'sudo systemctl stop ceph-e38b0b76-bd6b-11ee-95b2-87774f69a715@mon.smithi118'

fail 7535130 2024-01-26 16:13:04 2024-01-27 23:10:29 2024-01-27 23:37:25 0:26:56 0:17:46 0:09:10 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2024-01-27T23:33:49.239682+0000 mon.a (mon.0) 476 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7535131 2024-01-26 16:13:04 2024-01-27 23:10:29 2024-01-28 00:03:13 0:52:44 0:41:46 0:10:58 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7535132 2024-01-26 16:13:05 2024-01-27 23:11:50 2024-01-28 00:20:29 1:08:39 0:53:31 0:15:08 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
fail 7535133 2024-01-26 16:13:06 2024-01-27 23:16:41 2024-01-27 23:41:49 0:25:08 0:16:22 0:08:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=585bdf827237450fb80df4a856eba434a64948fa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7535134 2024-01-26 16:13:07 2024-01-27 23:16:41 2024-01-28 00:13:29 0:56:48 0:45:03 0:11:45 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7535135 2024-01-26 16:13:08 2024-01-27 23:17:22 2024-01-27 23:49:52 0:32:30 0:20:49 0:11:41 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-e449db0e-bd6c-11ee-95b2-87774f69a715@mon.smithi188'

fail 7535136 2024-01-26 16:13:09 2024-01-27 23:18:52 2024-01-27 23:38:53 0:20:01 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Failed to reconnect to smithi112

fail 7535137 2024-01-26 16:13:09 2024-01-27 23:18:53 2024-01-28 01:06:07 1:47:14 1:37:55 0:09:19 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a612ac2-bd6c-11ee-95b2-87774f69a715 -e sha1=a72a5dace2353a54147d37bd2699a2a482507966 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7535138 2024-01-26 16:13:10 2024-01-27 23:19:03 2024-01-28 00:02:39 0:43:36 0:33:39 0:09:57 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds