Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7249127 2023-04-23 18:59:28 2023-04-24 11:05:27 2023-04-24 11:09:12 0:03:45 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi163

dead 7249128 2023-04-23 18:59:29 2023-04-24 11:06:30 2023-04-24 11:37:47 0:31:17 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

dead 7249129 2023-04-23 18:59:30 2023-04-24 11:06:31 2023-04-24 11:21:28 0:14:57 0:04:06 0:10:51 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

{'smithi167.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi039.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}

dead 7249130 2023-04-23 18:59:30 2023-04-24 11:06:31 2023-04-24 12:22:15 1:15:44 0:39:09 0:36:35 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

{'smithi130.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/655edd281b923f12af44ba71f23c34741aa8dae0a29e712e75d96be8787f7115-modules.yaml.xz - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}

pass 7249131 2023-04-23 18:59:31 2023-04-24 11:10:03 2023-04-24 11:59:20 0:49:17 0:35:49 0:13:28 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7249132 2023-04-23 18:59:32 2023-04-24 11:10:24 2023-04-24 11:45:28 0:35:04 0:18:46 0:16:18 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 7249133 2023-04-23 18:59:33 2023-04-24 11:16:02 2023-04-24 12:36:08 1:20:06 0:45:58 0:34:08 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 7249134 2023-04-23 18:59:34 2023-04-24 11:18:41 2023-04-24 13:53:24 2:34:43 2:01:35 0:33:08 smithi main centos 8.stream orch/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ad9576de6cbdb0047be604610f5b00c42ad65335 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7249135 2023-04-23 18:59:34 2023-04-24 11:19:23 2023-04-24 13:46:58 2:27:35 1:55:42 0:31:53 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249136 2023-04-23 18:59:35 2023-04-24 11:19:24 2023-04-24 12:57:24 1:38:00 1:03:05 0:34:55 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7249137 2023-04-23 18:59:36 2023-04-24 11:22:05 2023-04-24 11:58:05 0:36:00 0:26:05 0:09:55 smithi main rhel 8.4 orch/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} 2
dead 7249138 2023-04-23 18:59:37 2023-04-24 11:22:05 2023-04-24 11:39:54 0:17:49 smithi main centos 8.stream orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

SSH connection to smithi167 was lost: 'sudo yum install -y kernel'

pass 7249139 2023-04-23 18:59:37 2023-04-24 11:22:10 2023-04-24 12:45:24 1:23:14 0:51:58 0:31:16 smithi main centos 8.stream orch/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
pass 7249140 2023-04-23 18:59:38 2023-04-24 11:22:21 2023-04-24 12:42:46 1:20:25 0:46:04 0:34:21 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7249141 2023-04-23 18:59:39 2023-04-24 11:25:12 2023-04-24 11:59:45 0:34:33 0:16:36 0:17:57 smithi main ubuntu 20.04 orch/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} 1
dead 7249142 2023-04-23 18:59:40 2023-04-24 11:30:09 2023-04-24 11:33:18 0:03:09 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi072

dead 7249143 2023-04-23 18:59:40 2023-04-24 11:30:44 2023-04-24 11:40:23 0:09:39 0:02:02 0:07:37 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

{'smithi150.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi047.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}

pass 7249144 2023-04-23 18:59:41 2023-04-24 11:31:15 2023-04-24 12:27:46 0:56:31 0:45:21 0:11:10 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7249145 2023-04-23 18:59:42 2023-04-24 11:34:16 2023-04-24 13:42:07 2:07:51 1:36:41 0:31:10 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7249146 2023-04-23 18:59:43 2023-04-24 11:34:37 2023-04-24 12:41:23 1:06:46 0:53:14 0:13:32 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 7249147 2023-04-23 18:59:43 2023-04-24 11:35:02 2023-04-24 12:04:04 0:29:02 0:17:57 0:11:05 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
dead 7249148 2023-04-23 18:59:44 2023-04-24 11:37:47 2023-04-24 11:55:20 0:17:33 0:04:31 0:13:02 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

{'smithi167.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': True, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': False}}, 'msg': "Failed to download metadata for repo 'rhel-8-for-x86_64-appstream-rpms': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", 'rc': 1, 'results': []}}

fail 7249149 2023-04-23 18:59:45 2023-04-24 11:38:38 2023-04-24 12:01:35 0:22:57 0:12:23 0:10:34 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

{'smithi167.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'cmd': "subscription-manager release --list | grep -E '[0-9]'", 'delta': '0:00:00.435298', 'end': '2023-04-24 11:52:23.142582', 'failed_when_result': True, 'invocation': {'module_args': {'_raw_params': "subscription-manager release --list | grep -E '[0-9]'", '_uses_shell': True, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 1, 'start': '2023-04-24 11:52:22.707284', 'stderr': 'Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information.', 'stderr_lines': ['Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information.'], 'stdout': '', 'stdout_lines': []}}

fail 7249150 2023-04-23 18:59:46 2023-04-24 11:41:09 2023-04-24 12:01:20 0:20:11 0:12:36 0:07:35 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi186 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad9576de6cbdb0047be604610f5b00c42ad65335 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 12210784-e297-11ed-9b00-001a4aab830c -- ceph mon dump -f json'

pass 7249151 2023-04-23 18:59:46 2023-04-24 11:41:09 2023-04-24 14:05:59 2:24:50 1:52:42 0:32:08 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249152 2023-04-23 18:59:47 2023-04-24 11:41:15 2023-04-24 12:28:31 0:47:16 0:34:20 0:12:56 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 7249153 2023-04-23 18:59:48 2023-04-24 11:41:50 2023-04-24 12:07:21 0:25:31 0:18:07 0:07:24 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7249154 2023-04-23 18:59:49 2023-04-24 11:41:51 2023-04-24 12:51:24 1:09:33 0:40:08 0:29:25 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7249155 2023-04-23 18:59:49 2023-04-24 11:41:51 2023-04-24 14:14:08 2:32:17 1:58:26 0:33:51 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
fail 7249156 2023-04-23 18:59:50 2023-04-24 11:44:57 2023-04-24 12:04:21 0:19:24 smithi main ubuntu 18.04 orch/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Cannot connect to remote host smithi093

pass 7249157 2023-04-23 18:59:51 2023-04-24 11:45:53 2023-04-24 12:35:58 0:50:05 0:30:10 0:19:55 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7249158 2023-04-23 18:59:52 2023-04-24 11:59:23 2023-04-24 12:24:41 0:25:18 0:18:21 0:06:57 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
pass 7249159 2023-04-23 18:59:52 2023-04-24 11:59:24 2023-04-24 12:32:47 0:33:23 0:21:38 0:11:45 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7249160 2023-04-23 18:59:53 2023-04-24 12:02:04 2023-04-24 14:11:21 2:09:17 1:38:56 0:30:21 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_pids_limit} 1
pass 7249161 2023-04-23 18:59:54 2023-04-24 12:02:04 2023-04-24 12:46:06 0:44:02 0:34:56 0:09:06 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
dead 7249162 2023-04-23 18:59:55 2023-04-24 12:02:35 2023-04-25 00:14:31 12:11:56 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 7249163 2023-04-23 18:59:55 2023-04-24 12:04:05 2023-04-24 13:31:30 1:27:25 0:57:34 0:29:51 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7249164 2023-04-23 18:59:56 2023-04-24 12:04:26 2023-04-24 12:36:33 0:32:07 0:21:23 0:10:44 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7249165 2023-04-23 18:59:57 2023-04-24 12:04:26 2023-04-24 12:28:36 0:24:10 0:15:23 0:08:47 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7249166 2023-04-23 18:59:58 2023-04-24 12:07:27 2023-04-24 14:27:32 2:20:05 1:45:59 0:34:06 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
pass 7249167 2023-04-23 18:59:59 2023-04-24 12:12:18 2023-04-24 12:53:43 0:41:25 0:21:50 0:19:35 smithi main ubuntu 18.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7249168 2023-04-23 18:59:59 2023-04-24 12:22:20 2023-04-24 14:18:08 1:55:48 1:44:55 0:10:53 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
dead 7249169 2023-04-23 19:00:00 2023-04-24 12:22:20 2023-04-25 00:33:04 12:10:44 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 7249170 2023-04-23 19:00:01 2023-04-24 12:24:51 2023-04-24 13:41:26 1:16:35 0:45:35 0:31:00 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7249171 2023-04-23 19:00:02 2023-04-24 12:25:41 2023-04-24 14:48:07 2:22:26 1:53:00 0:29:26 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
pass 7249172 2023-04-23 19:00:02 2023-04-24 12:27:02 2023-04-24 13:17:12 0:50:10 0:40:16 0:09:54 smithi main ubuntu 18.04 orch/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7249173 2023-04-23 19:00:03 2023-04-24 12:27:42 2023-04-24 14:37:24 2:09:42 1:39:14 0:30:28 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
pass 7249174 2023-04-23 19:00:04 2023-04-24 12:27:43 2023-04-24 14:40:58 2:13:15 1:44:21 0:28:54 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249175 2023-04-23 19:00:05 2023-04-24 12:27:53 2023-04-24 13:02:25 0:34:32 0:22:51 0:11:41 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7249176 2023-04-23 19:00:05 2023-04-24 12:28:34 2023-04-24 12:53:57 0:25:23 0:18:11 0:07:12 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7249177 2023-04-23 19:00:06 2023-04-24 12:28:34 2023-04-24 13:13:02 0:44:28 0:34:35 0:09:53 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
fail 7249178 2023-04-23 19:00:07 2023-04-24 12:29:24 2023-04-24 13:01:54 0:32:30 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi042 with status 1: 'sudo yum install -y kernel'

fail 7249179 2023-04-23 19:00:08 2023-04-24 12:36:06 2023-04-24 12:54:03 0:17:57 smithi main ubuntu 18.04 orch/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Cannot connect to remote host smithi077

pass 7249180 2023-04-23 19:00:08 2023-04-24 12:36:16 2023-04-24 13:01:54 0:25:38 0:19:28 0:06:10 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7249181 2023-04-23 19:00:09 2023-04-24 12:36:37 2023-04-24 12:56:15 0:19:38 0:07:32 0:12:06 smithi main orch/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=ad9576de6cbdb0047be604610f5b00c42ad65335

pass 7249182 2023-04-23 19:00:10 2023-04-24 12:38:37 2023-04-24 15:00:03 2:21:26 1:52:06 0:29:20 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249183 2023-04-23 19:00:11 2023-04-24 12:38:48 2023-04-24 13:59:54 1:21:06 0:49:42 0:31:24 smithi main centos 8.stream orch/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
pass 7249184 2023-04-23 19:00:11 2023-04-24 12:40:08 2023-04-24 13:01:16 0:21:08 0:15:28 0:05:40 smithi main rhel 8.4 orch/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
pass 7249185 2023-04-23 19:00:12 2023-04-24 12:40:09 2023-04-24 15:25:29 2:45:20 2:13:19 0:32:01 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7249186 2023-04-23 19:00:13 2023-04-24 12:41:29 2023-04-24 13:46:10 1:04:41 0:52:57 0:11:44 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7249187 2023-04-23 19:00:14 2023-04-24 12:42:00 2023-04-24 13:24:16 0:42:16 0:30:47 0:11:29 smithi main ubuntu 20.04 orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 7249188 2023-04-23 19:00:14 2023-04-24 12:42:00 2023-04-24 14:42:12 2:00:12 1:30:14 0:29:58 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
fail 7249189 2023-04-23 19:00:15 2023-04-24 12:42:51 2023-04-24 13:03:11 0:20:20 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Cannot connect to remote host smithi090

fail 7249190 2023-04-23 19:00:16 2023-04-24 12:44:21 2023-04-24 13:03:21 0:19:00 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Cannot connect to remote host smithi082

fail 7249191 2023-04-23 19:00:17 2023-04-24 12:45:32 2023-04-24 14:47:34 2:02:02 1:32:21 0:29:41 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi186.front.sepia.ceph.com: ['type=AVC msg=audit(1682347534.629:19123): avc: denied { ioctl } for pid=157194 comm="iptables" path="/var/lib/containers/storage/overlay/4c212827ef372227f529bf8d5f94a5b1efea895f7e2d70058194b8f9b86da923/merged" dev="overlay" ino=3412357 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7249192 2023-04-23 19:00:17 2023-04-24 12:46:12 2023-04-24 13:11:43 0:25:31 0:16:20 0:09:11 smithi main ubuntu 18.04 orch/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7249193 2023-04-23 19:00:18 2023-04-24 12:46:13 2023-04-24 13:13:31 0:27:18 0:18:48 0:08:30 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 7249194 2023-04-23 19:00:19 2023-04-24 12:47:53 2023-04-24 13:17:17 0:29:24 0:17:37 0:11:47 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
dead 7249195 2023-04-23 19:00:20 2023-04-24 12:49:24 2023-04-25 01:03:47 12:14:23 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7249196 2023-04-23 19:00:20 2023-04-24 12:53:45 2023-04-24 13:12:15 0:18:30 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Cannot connect to remote host smithi133

pass 7249197 2023-04-23 19:00:21 2023-04-24 12:54:05 2023-04-24 14:03:35 1:09:30 0:40:48 0:28:42 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
fail 7249198 2023-04-23 19:00:22 2023-04-24 12:54:06 2023-04-24 13:21:16 0:27:10 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi144 with status 1: 'sudo yum install -y kernel'

pass 7249199 2023-04-23 19:00:23 2023-04-24 12:56:16 2023-04-24 14:13:12 1:16:56 0:45:52 0:31:04 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 7249200 2023-04-23 19:00:23 2023-04-24 12:57:27 2023-04-24 15:18:32 2:21:05 1:47:33 0:33:32 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
dead 7249201 2023-04-23 19:00:24 2023-04-24 13:01:18 2023-04-24 14:08:45 1:07:27 0:36:55 0:30:32 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

{'smithi003.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/38c2c78c3f89e4d347f07263805d33d8ae17df60ebab2e8a259218c57bcae2fb-comps-PowerTools.x86_64.xml - Cannot download, all mirrors were already tried without success; repodata/655edd281b923f12af44ba71f23c34741aa8dae0a29e712e75d96be8787f7115-modules.yaml.xz - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}

pass 7249202 2023-04-23 19:00:25 2023-04-24 13:01:58 2023-04-24 13:27:31 0:25:33 0:18:26 0:07:07 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
dead 7249203 2023-04-23 19:00:25 2023-04-24 13:01:59 2023-04-24 13:53:02 0:51:03 0:21:27 0:29:36 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_pids_limit} 1
Failure Reason:

{'smithi136.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/e830a7a4e881ef24680d161802ae07874dd447031dd12d47f2d3d4a911245522-primary.xml.gz - Cannot download, all mirrors were already tried without success; repodata/144bb6d03f4cceafda5d5248f92ece7a8539e8858d490dfec5ebeb61d487bb20-filelists.xml.gz - Cannot download, all mirrors were already tried without success; repodata/38c2c78c3f89e4d347f07263805d33d8ae17df60ebab2e8a259218c57bcae2fb-comps-PowerTools.x86_64.xml - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}

fail 7249204 2023-04-23 19:00:26 2023-04-24 13:01:59 2023-04-24 13:20:36 0:18:37 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Cannot connect to remote host smithi125

fail 7249205 2023-04-23 19:00:27 2023-04-24 13:02:29 2023-04-24 13:22:24 0:19:55 0:13:17 0:06:38 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad9576de6cbdb0047be604610f5b00c42ad65335 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 409adda0-e2a2-11ed-9b00-001a4aab830c -- ceph osd stat -f json'

dead 7249206 2023-04-23 19:00:28 2023-04-24 13:03:20 2023-04-24 14:09:09 1:05:49 0:36:26 0:29:23 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

{'smithi090.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/38c2c78c3f89e4d347f07263805d33d8ae17df60ebab2e8a259218c57bcae2fb-comps-PowerTools.x86_64.xml - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}

pass 7249207 2023-04-23 19:00:28 2023-04-24 13:03:20 2023-04-24 13:45:20 0:42:00 0:31:40 0:10:20 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
pass 7249208 2023-04-23 19:00:29 2023-04-24 13:03:31 2023-04-24 13:33:28 0:29:57 0:18:17 0:11:40 smithi main ubuntu 20.04 orch/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7249209 2023-04-23 19:00:30 2023-04-24 13:05:31 2023-04-24 15:19:42 2:14:11 1:41:51 0:32:20 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7249210 2023-04-23 19:00:31 2023-04-24 13:08:42 2023-04-24 13:40:19 0:31:37 0:21:33 0:10:04 smithi main ubuntu 18.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 7249211 2023-04-23 19:00:31 2023-04-24 13:08:42 2023-04-24 13:33:07 0:24:25 0:13:46 0:10:39 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi032 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad9576de6cbdb0047be604610f5b00c42ad65335 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72aa16e8-e2a3-11ed-9b00-001a4aab830c -- ceph osd stat -f json'

fail 7249212 2023-04-23 19:00:32 2023-04-24 13:11:53 2023-04-24 13:27:56 0:16:03 0:06:17 0:09:46 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi077 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7249213 2023-04-23 19:00:33 2023-04-24 13:12:23 2023-04-24 15:12:43 2:00:20 1:50:17 0:10:03 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/radosbench} 3
pass 7249214 2023-04-23 19:00:34 2023-04-24 13:13:04 2023-04-24 15:42:20 2:29:16 1:58:46 0:30:30 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
pass 7249215 2023-04-23 19:00:34 2023-04-24 13:13:04 2023-04-24 13:51:05 0:38:01 0:30:56 0:07:05 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7249216 2023-04-23 19:00:35 2023-04-24 13:13:35 2023-04-24 15:24:28 2:10:53 1:37:03 0:33:50 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
fail 7249217 2023-04-23 19:00:36 2023-04-24 13:17:16 2023-04-24 13:35:14 0:17:58 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Cannot connect to remote host smithi188

dead 7249218 2023-04-23 19:00:37 2023-04-24 13:17:26 2023-04-25 01:29:03 12:11:37 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 7249219 2023-04-23 19:00:37 2023-04-24 13:19:37 2023-04-24 13:54:29 0:34:52 0:23:16 0:11:36 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
pass 7249220 2023-04-23 19:00:38 2023-04-24 13:20:37 2023-04-24 14:37:51 1:17:14 0:46:51 0:30:23 smithi main centos 8.stream orch/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7249221 2023-04-23 19:00:39 2023-04-24 13:20:38 2023-04-24 13:40:13 0:19:35 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Cannot connect to remote host smithi100

fail 7249222 2023-04-23 19:00:40 2023-04-24 13:22:28 2023-04-24 13:55:28 0:33:00 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi117 with status 1: 'sudo yum install -y kernel'

pass 7249223 2023-04-23 19:00:40 2023-04-24 13:27:39 2023-04-24 14:04:47 0:37:08 0:22:49 0:14:19 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7249224 2023-04-23 19:00:41 2023-04-24 13:29:10 2023-04-24 15:58:42 2:29:32 1:57:15 0:32:17 smithi main centos 8.stream orch/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ad9576de6cbdb0047be604610f5b00c42ad65335 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7249225 2023-04-23 19:00:42 2023-04-24 13:31:31 2023-04-24 15:52:33 2:21:02 1:51:53 0:29:09 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 7249226 2023-04-23 19:00:43 2023-04-24 13:31:31 2023-04-24 13:37:36 0:06:05 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Error reimaging machines: 'NoneType' object has no attribute '_fields'

pass 7249227 2023-04-23 19:00:43 2023-04-24 13:33:12 2023-04-24 13:57:16 0:24:04 0:17:04 0:07:00 smithi main rhel 8.4 orch/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_3.0} 2-node-mgr orchestrator_cli} 2
pass 7249228 2023-04-23 19:00:44 2023-04-24 13:33:32 2023-04-24 16:51:00 3:17:28 2:46:06 0:31:22 smithi main centos 8.stream orch/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
pass 7249229 2023-04-23 19:00:45 2023-04-24 13:35:23 2023-04-24 14:54:35 1:19:12 0:47:49 0:31:23 smithi main centos 8.stream orch/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
pass 7249230 2023-04-23 19:00:46 2023-04-24 13:36:23 2023-04-24 14:55:03 1:18:40 0:47:54 0:30:46 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7249231 2023-04-23 19:00:46 2023-04-24 13:37:54 2023-04-24 15:01:15 1:23:21 0:51:24 0:31:57 smithi main centos 8.stream orch/cephadm/smoke-singlehost/{0-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} 1
pass 7249232 2023-04-23 19:00:47 2023-04-24 13:40:14 2023-04-24 16:07:32 2:27:18 1:57:07 0:30:11 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
pass 7249233 2023-04-23 19:00:48 2023-04-24 13:40:15 2023-04-24 15:10:00 1:29:45 0:59:44 0:30:01 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7249234 2023-04-23 19:00:49 2023-04-24 13:40:25 2023-04-24 14:27:27 0:47:02 0:39:39 0:07:23 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7249235 2023-04-23 19:00:49 2023-04-24 13:41:36 2023-04-24 15:45:32 2:03:56 1:34:25 0:29:31 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7249236 2023-04-23 19:00:50 2023-04-24 13:42:16 2023-04-24 14:48:03 1:05:47 0:51:28 0:14:19 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
pass 7249237 2023-04-23 19:00:51 2023-04-24 13:46:17 2023-04-24 14:12:22 0:26:05 0:18:27 0:07:38 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7249238 2023-04-23 19:00:52 2023-04-24 13:47:08 2023-04-24 15:54:15 2:07:07 1:36:48 0:30:19 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi195.front.sepia.ceph.com: ['type=AVC msg=audit(1682351436.932:19138): avc: denied { ioctl } for pid=157823 comm="iptables" path="/var/lib/containers/storage/overlay/fd079f01066f3f2a589ea8107a556d7e1a5f7a2b60b312107a29eddf19ea023b/merged" dev="overlay" ino=3412331 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7249239 2023-04-23 19:00:52 2023-04-24 13:47:08 2023-04-24 14:14:41 0:27:33 0:16:22 0:11:11 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7249240 2023-04-23 19:00:53 2023-04-24 13:51:09 2023-04-24 14:18:35 0:27:26 0:18:07 0:09:19 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
fail 7249241 2023-04-23 19:00:54 2023-04-24 13:53:30 2023-04-24 14:12:23 0:18:53 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Cannot connect to remote host smithi136

pass 7249242 2023-04-23 19:00:54 2023-04-24 13:54:30 2023-04-24 16:19:15 2:24:45 1:53:07 0:31:38 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249243 2023-04-23 19:00:55 2023-04-24 13:55:31 2023-04-24 14:18:39 0:23:08 0:16:27 0:06:41 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7249244 2023-04-23 19:00:56 2023-04-24 13:55:31 2023-04-24 15:04:30 1:08:59 0:40:19 0:28:40 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7249245 2023-04-23 19:00:57 2023-04-24 13:55:31 2023-04-24 16:36:42 2:41:11 2:10:26 0:30:45 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
fail 7249246 2023-04-23 19:00:57 2023-04-24 13:57:22 2023-04-24 14:18:39 0:21:17 smithi main ubuntu 18.04 orch/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Cannot connect to remote host smithi101

pass 7249247 2023-04-23 19:00:58 2023-04-24 14:00:03 2023-04-24 14:28:58 0:28:55 0:17:58 0:10:57 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
fail 7249248 2023-04-23 19:00:59 2023-04-24 14:03:44 2023-04-24 14:23:39 0:19:55 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Cannot connect to remote host smithi077

pass 7249249 2023-04-23 19:01:00 2023-04-24 14:04:54 2023-04-24 16:09:11 2:04:17 1:35:16 0:29:01 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_pids_limit} 1
fail 7249250 2023-04-23 19:01:00 2023-04-24 14:06:05 2023-04-24 14:26:22 0:20:17 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Cannot connect to remote host smithi003

pass 7249251 2023-04-23 19:01:01 2023-04-24 14:08:55 2023-04-24 14:35:05 0:26:10 0:17:42 0:08:28 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7249252 2023-04-23 19:01:02 2023-04-24 14:09:16 2023-04-24 16:28:07 2:18:51 1:46:56 0:31:55 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249253 2023-04-23 19:01:03 2023-04-24 14:12:27 2023-04-24 15:16:21 1:03:54 0:53:02 0:10:52 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
dead 7249254 2023-04-23 19:01:03 2023-04-24 14:12:27 2023-04-25 02:22:41 12:10:14 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

hit max job timeout

pass 7249255 2023-04-23 19:01:04 2023-04-24 14:12:27 2023-04-24 16:24:43 2:12:16 1:41:59 0:30:17 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
fail 7249256 2023-04-23 19:01:05 2023-04-24 14:13:18 2023-04-24 14:32:52 0:19:34 smithi main ubuntu 18.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Cannot connect to remote host smithi066

fail 7249257 2023-04-23 19:01:06 2023-04-24 14:14:18 2023-04-24 14:33:04 0:18:46 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Cannot connect to remote host smithi178

pass 7249258 2023-04-23 19:01:06 2023-04-24 14:14:49 2023-04-24 15:47:53 1:33:04 0:59:21 0:33:43 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7249259 2023-04-23 19:01:07 2023-04-24 14:18:10 2023-04-24 15:36:23 1:18:13 0:47:26 0:30:47 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} 2
pass 7249260 2023-04-23 19:01:08 2023-04-24 14:18:40 2023-04-24 16:40:04 2:21:24 1:51:57 0:29:27 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 7249261 2023-04-23 19:01:09 2023-04-24 14:18:40 2023-04-24 15:11:01 0:52:21 0:41:58 0:10:23 smithi main ubuntu 20.04 orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7249262 2023-04-23 19:01:09 2023-04-24 14:18:41 2023-04-24 16:21:53 2:03:12 1:33:07 0:30:05 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
dead 7249263 2023-04-23 19:01:10 2023-04-24 14:18:41 2023-04-25 02:33:56 12:15:15 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 7249264 2023-04-23 19:01:11 2023-04-24 14:23:42 2023-04-24 14:49:19 0:25:37 0:16:20 0:09:17 smithi main rhel 8.4 orch/cephadm/osds/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7249265 2023-04-23 19:01:12 2023-04-24 14:26:33 2023-04-24 15:02:16 0:35:43 0:23:02 0:12:41 smithi main ubuntu 20.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7249266 2023-04-23 19:01:12 2023-04-24 14:27:33 2023-04-24 14:55:17 0:27:44 0:20:24 0:07:20 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
fail 7249267 2023-04-23 19:01:13 2023-04-24 14:27:34 2023-04-24 14:51:10 0:23:36 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Cannot connect to remote host smithi141

fail 7249268 2023-04-23 19:01:14 2023-04-24 14:32:55 2023-04-24 14:48:19 0:15:24 0:06:07 0:09:17 smithi main ubuntu 18.04 orch/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi057 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7249269 2023-04-23 19:01:14 2023-04-24 14:32:55 2023-04-24 15:01:03 0:28:08 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi171 with status 1: 'sudo yum install -y kernel'

pass 7249270 2023-04-23 19:01:15 2023-04-24 14:35:06 2023-04-24 15:03:34 0:28:28 0:19:33 0:08:55 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
fail 7249271 2023-04-23 19:01:16 2023-04-24 14:37:57 2023-04-24 14:58:33 0:20:36 0:07:43 0:12:53 smithi main orch/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=ad9576de6cbdb0047be604610f5b00c42ad65335

dead 7249272 2023-04-23 19:01:17 2023-04-24 14:41:08 2023-04-25 02:50:57 12:09:49 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 7249273 2023-04-23 19:01:18 2023-04-24 14:42:18 2023-04-24 16:09:22 1:27:04 0:51:43 0:35:21 smithi main centos 8.stream orch/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
pass 7249274 2023-04-23 19:01:18 2023-04-24 14:48:09 2023-04-24 15:11:41 0:23:32 0:16:58 0:06:34 smithi main rhel 8.4 orch/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/rgw 3-final} 1
pass 7249275 2023-04-23 19:01:19 2023-04-24 14:48:10 2023-04-24 17:17:41 2:29:31 1:58:57 0:30:34 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
pass 7249276 2023-04-23 19:01:20 2023-04-24 14:48:10 2023-04-24 15:30:30 0:42:20 0:31:29 0:10:51 smithi main ubuntu 20.04 orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7249277 2023-04-23 19:01:21 2023-04-24 14:48:20 2023-04-24 17:02:22 2:14:02 1:44:33 0:29:29 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 7249278 2023-04-23 19:01:21 2023-04-24 14:49:21 2023-04-24 16:48:09 1:58:48 1:27:53 0:30:55 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
fail 7249279 2023-04-23 19:01:22 2023-04-24 14:51:11 2023-04-24 15:13:46 0:22:35 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Cannot connect to remote host smithi177

fail 7249280 2023-04-23 19:01:23 2023-04-24 14:54:42 2023-04-24 15:13:09 0:18:27 smithi main ubuntu 18.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Cannot connect to remote host smithi063

fail 7249281 2023-04-23 19:01:24 2023-04-24 14:55:13 2023-04-24 15:14:02 0:18:49 smithi main ubuntu 18.04 orch/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Cannot connect to remote host smithi039

fail 7249282 2023-04-23 19:01:24 2023-04-24 14:55:23 2023-04-24 17:02:27 2:07:04 1:37:02 0:30:02 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi005.front.sepia.ceph.com: ['type=AVC msg=audit(1682355578.153:19148): avc: denied { ioctl } for pid=157040 comm="iptables" path="/var/lib/containers/storage/overlay/0eea661f79c0edc1a57f719ec97d86a96be8d6037291ce5807d36ebd1ddbb4e5/merged" dev="overlay" ino=3412398 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
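
The AVC record carries permissive=1, so SELinux logged the iptables ioctl but did not block it; the job fails only because the harness treats any denial as an error. A sketch for inspecting the denial on the node (getenforce and ausearch ship with the standard SELinux/audit packages):

    getenforce                       # expected: Permissive on this node
    sudo ausearch -m avc -ts recent  # show the raw AVC records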

pass 7249283 2023-04-23 19:01:25 2023-04-24 14:55:23 2023-04-24 15:27:17 0:31:54 0:23:05 0:08:49 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 7249284 2023-04-23 19:01:26 2023-04-24 14:58:34 2023-04-24 15:28:07 0:29:33 0:18:06 0:11:27 smithi main ubuntu 20.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
fail 7249285 2023-04-23 19:01:27 2023-04-24 15:00:05 2023-04-24 15:20:24 0:20:19 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Cannot connect to remote host smithi012

pass 7249286 2023-04-23 19:01:28 2023-04-24 15:01:05 2023-04-24 17:22:26 2:21:21 1:50:51 0:30:30 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7249287 2023-04-23 19:01:28 2023-04-24 15:01:06 2023-04-24 16:10:14 1:09:08 0:40:10 0:28:58 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
dead 7249288 2023-04-23 19:01:29 2023-04-24 15:01:16 2023-04-25 03:12:24 12:11:08 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 7249289 2023-04-23 19:01:30 2023-04-24 15:02:17 2023-04-24 16:27:20 1:25:03 0:53:08 0:31:55 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 7249290 2023-04-23 19:01:31 2023-04-24 15:03:37 2023-04-24 17:29:47 2:26:10 1:56:49 0:29:21 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 7249291 2023-04-23 19:01:31 2023-04-24 15:04:38 2023-04-24 15:57:36 0:52:58 0:41:09 0:11:49 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
pass 7249292 2023-04-23 19:01:32 2023-04-24 15:10:09 2023-04-24 15:44:27 0:34:18 0:27:34 0:06:44 smithi main rhel 8.4 orch/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
pass 7249293 2023-04-23 19:01:33 2023-04-24 15:11:09 2023-04-24 17:19:24 2:08:15 1:36:55 0:31:20 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_pids_limit} 1
pass 7249294 2023-04-23 19:01:34 2023-04-24 15:11:50 2023-04-24 16:06:16 0:54:26 0:43:43 0:10:43 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
pass 7249295 2023-04-23 19:01:34 2023-04-24 15:12:50 2023-04-24 15:41:06 0:28:16 0:18:30 0:09:46 smithi main ubuntu 20.04 orch/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7249296 2023-04-23 19:01:35 2023-04-24 15:13:10 2023-04-24 15:44:20 0:31:10 0:22:20 0:08:50 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7249297 2023-04-23 19:01:36 2023-04-24 15:13:51 2023-04-24 17:36:08 2:22:17 1:50:58 0:31:19 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7249298 2023-04-23 19:01:37 2023-04-24 15:14:11 2023-04-24 15:43:30 0:29:19 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi125 with status 1: 'sudo yum install -y kernel'

pass 7249299 2023-04-23 19:01:38 2023-04-24 15:16:22 2023-04-24 17:35:42 2:19:20 1:50:18 0:29:02 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7249300 2023-04-23 19:01:38 2023-04-24 15:16:22 2023-04-24 15:48:31 0:32:09 0:21:20 0:10:49 smithi main ubuntu 18.04 orch/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
pass 7249301 2023-04-23 19:01:39 2023-04-24 15:16:53 2023-04-24 15:43:37 0:26:44 0:19:32 0:07:12 smithi main rhel 8.4 orch/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
fail 7249302 2023-04-23 19:01:40 2023-04-24 15:18:33 2023-04-24 15:38:44 0:20:11 smithi main ubuntu 18.04 orch/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Cannot connect to remote host smithi178

pass 7249303 2023-04-23 19:01:41 2023-04-24 15:20:34 2023-04-24 18:04:34 2:44:00 2:09:11 0:34:49 smithi main centos 8.stream orch/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
fail 7249304 2023-04-23 19:01:41 2023-04-24 15:24:35 2023-04-24 15:47:11 0:22:36 0:15:12 0:07:24 smithi main rhel 8.4 orch/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi143 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad9576de6cbdb0047be604610f5b00c42ad65335 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ca4d71b6-e2b6-11ed-9b00-001a4aab830c -- ceph mon dump -f json'
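
Exit status 127 means "command not found", which points at the staged cephadm file or the container image rather than at `ceph mon dump` itself. A sketch for checking both, reusing the path and image tag from the failure and assuming podman, which the container-tools distros ship:

    ls -l /home/ubuntu/cephtest/cephadm   # is the staged script present and executable?
    sudo podman pull quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad9576de6cbdb0047be604610f5b00c42ad65335
    # if the pull fails, the internal registry is the culprit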

pass 7249305 2023-04-23 19:01:42 2023-04-24 15:25:36 2023-04-24 17:32:14 2:06:38 1:35:12 0:31:26 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1