Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6894695 2022-06-23 15:14:38 2022-06-23 15:19:30 2022-06-23 15:33:08 0:13:38 0:06:40 0:06:58 smithi main centos 8.stream rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi157 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 38be9dee-f309-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi157:/dev/nvme4n1'

fail 6894696 2022-06-23 15:14:39 2022-06-23 15:19:30 2022-06-23 15:41:37 0:22:07 0:13:29 0:08:38 smithi main centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6894697 2022-06-23 15:14:40 2022-06-23 15:20:31 2022-06-23 15:46:22 0:25:51 0:19:27 0:06:24 smithi main rhel 8.6 rados:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} 1
Failure Reason:

timeout expired in wait_until_healthy

pass 6894698 2022-06-23 15:14:41 2022-06-23 15:20:51 2022-06-23 15:34:12 0:13:21 0:04:17 0:09:04 smithi main rados:cephadm/workunits/{agent/off mon_election/connectivity task/test_cephadm_repos} 1
fail 6894699 2022-06-23 15:14:42 2022-06-23 15:21:12 2022-06-23 15:39:31 0:18:19 0:11:20 0:06:59 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi112 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2d75684a-f30a-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi112:/dev/nvme4n1'

fail 6894700 2022-06-23 15:14:44 2022-06-23 15:22:22 2022-06-23 15:43:39 0:21:17 0:14:18 0:06:59 smithi main centos 8.stream rados:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 6894701 2022-06-23 15:14:45 2022-06-23 15:23:03 2022-06-23 15:41:33 0:18:30 0:12:10 0:06:20 smithi main centos 8.stream rados:cephadm/workunits/{agent/on mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
fail 6894702 2022-06-23 15:14:46 2022-06-23 15:23:03 2022-06-23 15:40:58 0:17:55 0:10:51 0:07:04 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi074 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 522774f8-f30a-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi074:/dev/nvme4n1'

fail 6894703 2022-06-23 15:14:47 2022-06-23 15:23:54 2022-06-23 15:53:37 0:29:43 0:18:58 0:10:45 smithi main rados:cephadm/workunits/{agent/off mon_election/connectivity task/test_nfs} 1
Failure Reason:

timeout expired in wait_until_healthy

fail 6894704 2022-06-23 15:14:48 2022-06-23 15:24:24 2022-06-23 15:51:45 0:27:21 0:19:46 0:07:35 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6894705 2022-06-23 15:14:49 2022-06-23 15:24:35 2022-06-23 15:42:39 0:18:04 0:07:36 0:10:28 smithi main ubuntu 20.04 rados:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 708e7108-f30a-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi036:/dev/nvme4n1'

pass 6894706 2022-06-23 15:14:50 2022-06-23 15:24:45 2022-06-23 15:52:29 0:27:44 0:17:12 0:10:32 smithi main rados:cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
fail 6894707 2022-06-23 15:14:51 2022-06-23 15:25:15 2022-06-23 15:49:47 0:24:32 0:19:14 0:05:18 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6894708 2022-06-23 15:14:52 2022-06-23 15:25:16 2022-06-23 15:38:22 0:13:06 0:06:39 0:06:27 smithi main centos 8.stream rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fe4c1424-f309-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi042:/dev/nvme4n1'

fail 6894709 2022-06-23 15:14:53 2022-06-23 15:25:26 2022-06-23 15:48:18 0:22:52 0:13:42 0:09:10 smithi main centos 8.stream rados:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} 1
Failure Reason:

timeout expired in wait_until_healthy

pass 6894710 2022-06-23 15:14:54 2022-06-23 15:26:17 2022-06-23 15:45:17 0:19:00 0:07:43 0:11:17 smithi main rados:cephadm/workunits/{agent/on mon_election/classic task/test_adoption} 1
fail 6894711 2022-06-23 15:14:55 2022-06-23 15:26:17 2022-06-23 15:52:15 0:25:58 0:18:13 0:07:45 smithi main rhel 8.6 rados:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 6894712 2022-06-23 15:14:56 2022-06-23 15:26:47 2022-06-23 15:52:56 0:26:09 0:16:10 0:09:59 smithi main rados:cephadm/workunits/{agent/off mon_election/connectivity task/test_cephadm} 1
fail 6894713 2022-06-23 15:14:57 2022-06-23 15:27:18 2022-06-23 15:41:08 0:13:50 0:06:38 0:07:12 smithi main centos 8.stream rados:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi107 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5a0c2600-f30a-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi107:/dev/nvme4n1'

pass 6894714 2022-06-23 15:14:59 2022-06-23 15:27:58 2022-06-23 15:42:13 0:14:15 0:04:20 0:09:55 smithi main rados:cephadm/workunits/{agent/on mon_election/classic task/test_cephadm_repos} 1
fail 6894715 2022-06-23 15:15:00 2022-06-23 15:28:39 2022-06-23 16:06:43 0:38:04 0:27:36 0:10:28 smithi main ubuntu 20.04 rados:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6894716 2022-06-23 15:15:01 2022-06-23 15:29:19 2022-06-23 15:48:32 0:19:13 0:11:28 0:07:45 smithi main rhel 8.6 rados:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi160 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:df831b0959db84cbce1e3a370ed7a14414c1bf4a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 70905fc6-f30b-11ec-842b-001a4aab830c -- ceph orch daemon add osd smithi160:/dev/nvme4n1'