Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7116859 2022-12-14 21:15:26 2022-12-15 04:39:34 2022-12-15 05:01:25 0:21:51 0:08:43 0:13:08 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dbde439e-7c34-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi032:vg_nvme/lv_4'

dead 7116860 2022-12-14 21:15:42 2022-12-15 04:40:09 2022-12-15 04:43:01 0:02:52 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi132

pass 7116861 2022-12-14 21:15:58 2022-12-15 04:40:58 2022-12-15 05:12:29 0:31:31 0:22:59 0:08:32 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
fail 7116862 2022-12-14 21:16:09 2022-12-15 04:41:19 2022-12-15 05:06:40 0:25:21 0:14:20 0:11:01 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi085 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7823efb0-7c35-11ed-8443-001a4aab830c -- ceph orch device zap smithi085 /dev/vg_nvme/lv_4 --force'

fail 7116863 2022-12-14 21:16:20 2022-12-15 04:41:24 2022-12-15 05:11:35 0:30:11 0:18:46 0:11:25 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116864 2022-12-14 21:16:25 2022-12-15 04:41:55 2022-12-15 05:08:24 0:26:29 0:14:41 0:11:48 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c03004ba-7c35-11ed-8443-001a4aab830c -- ceph orch device zap smithi038 /dev/vg_nvme/lv_4 --force'

fail 7116865 2022-12-14 21:16:36 2022-12-15 04:42:30 2022-12-15 05:04:15 0:21:45 0:10:26 0:11:19 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi071 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 34eadf88-7c35-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi071:vg_nvme/lv_4'

pass 7116866 2022-12-14 21:16:43 2022-12-15 04:43:33 2022-12-15 05:08:40 0:25:07 0:16:15 0:08:52 smithi main centos 8.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/off orchestrator_cli} 2
fail 7116867 2022-12-14 21:16:49 2022-12-15 04:43:34 2022-12-15 05:05:41 0:22:07 0:11:21 0:10:46 smithi main centos 8.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on smithi008 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 73409944-7c35-11ed-8443-001a4aab830c -- ceph orch device zap smithi008 /dev/vg_nvme/lv_4 --force'

fail 7116868 2022-12-14 21:17:05 2022-12-15 04:44:34 2022-12-15 05:10:03 0:25:29 0:14:51 0:10:38 smithi main rhel 8.6 orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi183 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 371fd6ae-7c36-11ed-8443-001a4aab830c -- ceph orch device zap smithi183 /dev/vg_nvme/lv_4 --force'

fail 7116869 2022-12-14 21:17:11 2022-12-15 04:45:26 2022-12-15 05:20:12 0:34:46 0:23:58 0:10:48 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116870 2022-12-14 21:17:12 2022-12-15 04:46:31 2022-12-15 05:11:56 0:25:25 0:15:15 0:10:10 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3237d448-7c36-11ed-8443-001a4aab830c -- ceph orch device zap smithi036 /dev/nvme4n1 --force'

pass 7116871 2022-12-14 21:17:18 2022-12-15 04:47:00 2022-12-15 05:04:07 0:17:07 0:06:14 0:10:53 smithi main ubuntu 20.04 orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
fail 7116872 2022-12-14 21:17:34 2022-12-15 04:47:26 2022-12-15 05:19:05 0:31:39 0:22:50 0:08:49 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116873 2022-12-14 21:17:50 2022-12-15 04:47:36 2022-12-15 05:18:44 0:31:08 0:18:18 0:12:50 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116874 2022-12-14 21:17:56 2022-12-15 04:48:11 2022-12-15 05:13:32 0:25:21 0:14:33 0:10:48 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi040 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6913312e-7c36-11ed-8443-001a4aab830c -- ceph orch device zap smithi040 /dev/vg_nvme/lv_4 --force'

fail 7116875 2022-12-14 21:18:07 2022-12-15 04:48:37 2022-12-15 05:21:25 0:32:48 0:21:52 0:10:56 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8c9e6bc6-7c37-11ed-8443-001a4aab830c -- ceph orch device zap smithi007 /dev/vg_nvme/lv_4 --force'

fail 7116876 2022-12-14 21:18:10 2022-12-15 04:48:43 2022-12-15 05:20:49 0:32:06 0:22:04 0:10:02 smithi main rhel 8.6 orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 937e9178-7c37-11ed-8443-001a4aab830c -- ceph orch device zap smithi033 /dev/vg_nvme/lv_4 --force'

fail 7116877 2022-12-14 21:18:16 2022-12-15 04:49:03 2022-12-15 12:48:19 7:59:16 7:42:41 0:16:35 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Cannot connect to remote host smithi136

fail 7116878 2022-12-14 21:18:27 2022-12-15 04:50:49 2022-12-15 05:17:35 0:26:46 0:13:47 0:12:59 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e1ab4c2a-7c36-11ed-8443-001a4aab830c -- ceph orch device zap smithi039 /dev/nvme4n1 --force'

fail 7116879 2022-12-14 21:18:33 2022-12-15 04:51:25 2022-12-15 05:14:12 0:22:47 0:13:10 0:09:37 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a1d6280e-7c36-11ed-8443-001a4aab830c -- ceph orch device zap smithi120 /dev/vg_nvme/lv_4 --force'

fail 7116880 2022-12-14 21:18:40 2022-12-15 04:51:46 2022-12-15 05:22:45 0:30:59 0:18:43 0:12:16 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116881 2022-12-14 21:18:46 2022-12-15 04:52:25 2022-12-15 05:14:01 0:21:36 0:10:30 0:11:06 smithi main centos 8.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi037 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a1c750ea-7c36-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4'

fail 7116882 2022-12-14 21:19:02 2022-12-15 04:52:36 2022-12-15 05:24:20 0:31:44 0:21:49 0:09:55 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116883 2022-12-14 21:19:03 2022-12-15 04:52:37 2022-12-15 05:23:33 0:30:56 0:18:24 0:12:32 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116884 2022-12-14 21:19:09 2022-12-15 04:54:30 2022-12-15 05:21:25 0:26:55 0:18:10 0:08:45 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi098 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ac4c08d4-7c37-11ed-8443-001a4aab830c -- ceph orch device zap smithi098 /dev/vg_nvme/lv_4 --force'

fail 7116885 2022-12-14 21:19:14 2022-12-15 04:54:31 2022-12-15 05:15:37 0:21:06 0:12:49 0:08:17 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi099 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0f225e64-7c37-11ed-8443-001a4aab830c -- ceph orch device zap smithi099 /dev/vg_nvme/lv_4 --force'

fail 7116886 2022-12-14 21:19:24 2022-12-15 04:55:18 2022-12-15 05:28:58 0:33:40 0:20:05 0:13:35 smithi main rhel 8.6 orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 779dcfae-7c38-11ed-8443-001a4aab830c -- ceph orch device zap smithi006 /dev/vg_nvme/lv_4 --force'

fail 7116887 2022-12-14 21:19:30 2022-12-15 04:58:03 2022-12-15 05:19:23 0:21:20 0:10:46 0:10:34 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi031 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5bf1248c-7c37-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi031:vg_nvme/lv_4'

fail 7116888 2022-12-14 21:19:37 2022-12-15 04:58:44 2022-12-15 05:24:03 0:25:19 0:09:26 0:15:53 smithi main ubuntu 20.04 orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b6b6886c-7c37-11ed-8443-001a4aab830c -- ceph orch device zap smithi102 /dev/nvme4n1 --force'

fail 7116889 2022-12-14 21:19:48 2022-12-15 04:59:55 2022-12-15 05:34:12 0:34:17 0:21:14 0:13:03 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116890 2022-12-14 21:19:52 2022-12-15 05:00:35 2022-12-15 05:31:14 0:30:39 0:19:50 0:10:49 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116891 2022-12-14 21:19:57 2022-12-15 05:01:05 2022-12-15 05:31:02 0:29:57 0:20:35 0:09:22 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

dead 7116892 2022-12-14 21:20:02 2022-12-15 05:01:05 2022-12-15 05:13:08 0:12:03 0:04:20 0:07:43 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason:

{'smithi089.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': True, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': False}}, 'msg': 'Failure downloading http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm, Request failed: <urlopen error [Errno 111] Connection refused>'}}

fail 7116893 2022-12-14 21:20:03 2022-12-15 05:01:06 2022-12-15 05:30:31 0:29:25 0:12:23 0:17:02 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi090 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 70e8bc32-7c38-11ed-8443-001a4aab830c -- ceph orch device zap smithi090 /dev/vg_nvme/lv_4 --force'

fail 7116894 2022-12-14 21:20:09 2022-12-15 05:01:55 2022-12-15 05:44:03 0:42:08 0:29:31 0:12:37 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116895 2022-12-14 21:20:25 2022-12-15 05:02:26 2022-12-15 05:26:26 0:24:00 0:14:23 0:09:37 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi060 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 32c27cb8-7c38-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi060:vg_nvme/lv_4'

fail 7116896 2022-12-14 21:20:31 2022-12-15 05:02:27 2022-12-15 05:28:14 0:25:47 0:12:19 0:13:28 smithi main ubuntu 20.04 orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi158 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5c23b98c-7c38-11ed-8443-001a4aab830c -- ceph orch device zap smithi158 /dev/vg_nvme/lv_4 --force'

pass 7116897 2022-12-14 21:20:36 2022-12-15 05:02:27 2022-12-15 05:29:09 0:26:42 0:17:54 0:08:48 smithi main rhel 8.6 orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} 2
fail 7116900 2022-12-14 21:20:39 2022-12-15 05:02:27 2022-12-15 05:24:37 0:22:10 0:10:41 0:11:29 smithi main centos 8.stream orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi111 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 13eefce4-7c38-11ed-8443-001a4aab830c -- ceph orch device zap smithi111 /dev/nvme4n1 --force'

fail 7116902 2022-12-14 21:20:45 2022-12-15 05:02:48 2022-12-15 05:25:56 0:23:08 0:12:27 0:10:41 smithi main rhel 8.6 orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi203 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1f219e8c-7c38-11ed-8443-001a4aab830c -- ceph orch device zap smithi203 /dev/vg_nvme/lv_4 --force'

fail 7116903 2022-12-14 21:20:51 2022-12-15 05:02:48 2022-12-15 05:34:33 0:31:45 0:18:30 0:13:15 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116904 2022-12-14 21:20:57 2022-12-15 05:05:32 2022-12-15 05:27:39 0:22:07 0:08:37 0:13:30 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5e5ebd0a-7c38-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi035:vg_nvme/lv_4'

fail 7116906 2022-12-14 21:21:03 2022-12-15 05:25:19 2022-12-15 05:57:52 0:32:33 0:18:52 0:13:41 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi012 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 641030b8-7c3c-11ed-8443-001a4aab830c -- ceph orch device zap smithi012 /dev/vg_nvme/lv_4 --force'

fail 7116908 2022-12-14 21:21:13 2022-12-15 05:59:27 1251 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116911 2022-12-14 21:21:15 2022-12-15 05:27:24 2022-12-15 05:56:13 0:28:49 0:19:50 0:08:59 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116913 2022-12-14 21:21:21 2022-12-15 05:27:57 2022-12-15 05:51:38 0:23:41 0:14:10 0:09:31 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi158 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dac43bd8-7c3b-11ed-8443-001a4aab830c -- ceph orch device zap smithi158 /dev/vg_nvme/lv_4 --force'

fail 7116915 2022-12-14 21:21:37 2022-12-15 05:28:25 2022-12-15 05:51:16 0:22:51 0:11:10 0:11:41 smithi main centos 8.stream orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi006 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e16ff9a4-7c3b-11ed-8443-001a4aab830c -- ceph orch device zap smithi006 /dev/nvme4n1 --force'

fail 7116917 2022-12-14 21:21:43 2022-12-15 05:29:30 2022-12-15 06:53:35 1:24:05 1:10:40 0:13:25 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116919 2022-12-14 21:21:54 2022-12-15 05:56:30 805 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi191 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 84938696-7c3c-11ed-8443-001a4aab830c -- ceph orch device zap smithi191 /dev/vg_nvme/lv_4 --force'

fail 7116921 2022-12-14 21:21:55 2022-12-15 05:31:53 2022-12-15 05:53:12 0:21:19 0:10:57 0:10:22 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi134 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 01090cc4-7c3c-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi134:vg_nvme/lv_4'

fail 7116923 2022-12-14 21:22:07 2022-12-15 05:59:27 846 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi090 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c64e89f0-7c3c-11ed-8443-001a4aab830c -- ceph orch device zap smithi090 /dev/vg_nvme/lv_4 --force'

pass 7116925 2022-12-14 21:22:18 2022-12-15 05:32:27 2022-12-15 05:53:33 0:21:06 0:10:00 0:11:06 smithi main ubuntu 20.04 orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} 1
fail 7116927 2022-12-14 21:22:33 2022-12-15 05:32:51 2022-12-15 06:05:31 0:32:40 0:20:55 0:11:45 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116929 2022-12-14 21:22:39 2022-12-15 05:33:16 2022-12-15 06:19:09 0:45:53 0:31:04 0:14:49 smithi main ubuntu 20.04 orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

dead 7116930 2022-12-14 21:22:48 2022-12-15 05:35:12 2022-12-15 12:46:19 7:11:07 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
fail 7116931 2022-12-14 21:22:54 2022-12-15 05:36:17 2022-12-15 05:59:45 0:23:28 0:14:16 0:09:12 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi008 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1aaa128a-7c3d-11ed-8443-001a4aab830c -- ceph orch device zap smithi008 /dev/vg_nvme/lv_4 --force'

fail 7116933 2022-12-14 21:23:06 2022-12-15 05:36:53 2022-12-15 06:01:18 0:24:25 0:12:21 0:12:04 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi085 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid feb3a9ba-7c3c-11ed-8443-001a4aab830c -- ceph orch device zap smithi085 /dev/nvme4n1 --force'

fail 7116935 2022-12-14 21:23:13 2022-12-15 05:37:18 2022-12-15 06:04:37 0:27:19 0:17:32 0:09:47 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi165.front.sepia.ceph.com: ['type=AVC msg=audit(1671084079.953:19386): avc: denied { ioctl } for pid=123646 comm="iptables" path="/var/lib/containers/storage/overlay/d0ca76117fc80a7b531f21cca05db25a39d49a4662a752e2a15fd9ead9d917d4/merged" dev="overlay" ino=3409987 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1671084080.066:19389): avc: denied { ioctl } for pid=123662 comm="iptables" path="/var/lib/containers/storage/overlay/d0ca76117fc80a7b531f21cca05db25a39d49a4662a752e2a15fd9ead9d917d4/merged" dev="overlay" ino=3409987 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7116937 2022-12-14 21:23:25 2022-12-15 05:37:18 2022-12-15 06:01:33 0:24:15 0:14:23 0:09:52 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi038 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5c68bf97e405dbc8011a44bde3209055716f1129 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5179c99a-7c3d-11ed-8443-001a4aab830c -- ceph orch device zap smithi038 /dev/vg_nvme/lv_4 --force'

fail 7116939 2022-12-14 21:23:31 2022-12-15 05:37:19 2022-12-15 06:07:52 0:30:33 0:18:54 0:11:39 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116941 2022-12-14 21:23:37 2022-12-15 05:37:34 2022-12-15 06:03:46 0:26:12 0:14:22 0:11:50 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi106 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4fee4934-7c3d-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi106:vg_nvme/lv_4'

fail 7116943 2022-12-14 21:23:53 2022-12-15 05:37:35 2022-12-15 05:59:27 0:21:52 0:09:57 0:11:55 smithi main centos 8.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi082 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c7af93e8-7c3c-11ed-8443-001a4aab830c -- ceph orch daemon add osd smithi082:vg_nvme/lv_4'

fail 7116945 2022-12-14 21:24:05 2022-12-15 05:37:45 2022-12-15 06:07:52 0:30:07 0:18:59 0:11:08 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 7116947 2022-12-14 21:24:16 2022-12-15 05:37:56 2022-12-15 06:09:26 0:31:30 0:19:16 0:12:14 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds