Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7423696 2023-10-12 22:09:41 2023-10-12 22:10:18 2023-10-12 23:10:37 1:00:19 0:51:03 0:09:16 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
fail 7423697 2023-10-12 22:09:42 2023-10-12 22:10:28 2023-10-12 22:43:29 0:33:01 0:23:04 0:09:57 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-e4069eba-694e-11ee-8db6-212e2dc638e7@mon.smithi195'

fail 7423698 2023-10-12 22:09:43 2023-10-12 22:10:29 2023-10-12 23:06:52 0:56:23 0:46:10 0:10:13 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7423699 2023-10-12 22:09:44 2023-10-12 22:10:39 2023-10-12 22:46:45 0:36:06 0:26:17 0:09:49 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7423700 2023-10-12 22:09:44 2023-10-12 22:11:39 2023-10-12 23:03:47 0:52:08 0:41:04 0:11:04 smithi main ubuntu 18.04 orch:cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
fail 7423701 2023-10-12 22:09:45 2023-10-12 22:12:00 2023-10-12 22:42:29 0:30:29 0:19:08 0:11:21 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-d33f3920-694e-11ee-8db6-212e2dc638e7@mon.smithi158'

fail 7423702 2023-10-12 22:09:46 2023-10-12 22:12:40 2023-10-12 22:45:47 0:33:07 0:23:49 0:09:18 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi078 with status 5: 'sudo systemctl stop ceph-5754d954-694f-11ee-8db6-212e2dc638e7@mon.smithi078'

pass 7423703 2023-10-12 22:09:47 2023-10-12 22:12:41 2023-10-12 22:48:08 0:35:27 0:23:01 0:12:26 smithi main ubuntu 18.04 orch:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7423704 2023-10-12 22:09:48 2023-10-12 22:13:51 2023-10-12 22:36:32 0:22:41 0:15:23 0:07:18 smithi main rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:3da7aaee8cb8051dac57683eeb2f18b9da99d2c9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fb44f766-694e-11ee-8db6-212e2dc638e7 -- ceph mon dump -f json'

fail 7423705 2023-10-12 22:09:49 2023-10-12 22:13:51 2023-10-12 22:32:41 0:18:50 0:08:15 0:10:35 smithi main orch:cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=3da7aaee8cb8051dac57683eeb2f18b9da99d2c9

fail 7423706 2023-10-12 22:09:49 2023-10-12 22:15:12 2023-10-12 22:49:46 0:34:34 0:23:59 0:10:35 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-f2c19ecc-694f-11ee-8db6-212e2dc638e7@mon.smithi163'

pass 7423707 2023-10-12 22:09:50 2023-10-12 22:15:42 2023-10-12 23:07:29 0:51:47 0:42:27 0:09:20 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
fail 7423708 2023-10-12 22:09:51 2023-10-12 22:15:53 2023-10-12 22:45:08 0:29:15 0:18:50 0:10:25 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3da7aaee8cb8051dac57683eeb2f18b9da99d2c9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7423709 2023-10-12 22:09:52 2023-10-12 22:15:53 2023-10-12 22:51:33 0:35:40 0:22:47 0:12:53 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi118 with status 5: 'sudo systemctl stop ceph-09057d52-6950-11ee-8db6-212e2dc638e7@mon.smithi118'

pass 7423710 2023-10-12 22:09:53 2023-10-12 22:17:44 2023-10-12 23:02:55 0:45:11 0:33:43 0:11:28 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 7423711 2023-10-12 22:09:54 2023-10-12 22:19:14 2023-10-13 01:19:47 3:00:33 2:49:57 0:10:36 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7423712 2023-10-12 22:09:55 2023-10-12 22:20:05 2023-10-12 23:09:13 0:49:08 0:29:07 0:20:01 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
fail 7423713 2023-10-12 22:09:55 2023-10-12 22:21:15 2023-10-12 22:51:42 0:30:27 0:20:16 0:10:11 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

timeout expired in wait_until_healthy

fail 7423714 2023-10-12 22:09:56 2023-10-12 22:21:36 2023-10-12 22:57:18 0:35:42 0:23:08 0:12:34 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-f3e6fdc8-6950-11ee-8db6-212e2dc638e7@mon.smithi138'

fail 7423715 2023-10-12 22:09:57 2023-10-12 22:22:56 2023-10-12 22:58:03 0:35:07 0:26:37 0:08:30 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7423716 2023-10-12 22:09:58 2023-10-12 22:22:57 2023-10-13 00:31:31 2:08:34 1:59:23 0:09:11 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 7423717 2023-10-12 22:09:59 2023-10-12 22:23:57 2023-10-12 23:08:15 0:44:18 0:24:32 0:19:46 smithi main ubuntu 18.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
pass 7423718 2023-10-12 22:10:00 2023-10-12 22:24:28 2023-10-12 23:14:48 0:50:20 0:40:44 0:09:36 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
fail 7423719 2023-10-12 22:10:01 2023-10-12 22:24:48 2023-10-12 23:23:12 0:58:24 0:47:05 0:11:19 smithi main ubuntu 18.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7423720 2023-10-12 22:10:01 2023-10-12 22:25:29 2023-10-12 22:56:57 0:31:28 0:22:37 0:08:51 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-079ada9c-6951-11ee-8db6-212e2dc638e7@mon.smithi172'

fail 7423721 2023-10-12 22:10:02 2023-10-12 22:25:39 2023-10-12 22:48:51 0:23:12 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi059