User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2022-12-15 05:14:03 | 2022-12-15 18:48:50 | 2022-12-15 18:49:04 | 0:00:14 | orch:cephadm | wip-adk-testing-2022-12-14-1659 | smithi | ea64f72 | 46 | 5 | 46 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7117680 | 2022-12-15 05:15:33 | 2022-12-15 05:41:56 | 2022-12-15 06:07:04 | 0:25:08 | 0:14:56 | 0:10:12 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
fail | 7117681 | 2022-12-15 05:15:39 | 2022-12-15 05:45:03 | 2022-12-15 06:01:42 | 0:16:39 | 0:06:54 | 0:09:45 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi183 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
pass | 7117682 | 2022-12-15 05:15:50 | 2022-12-15 05:45:03 | 2022-12-15 06:27:18 | 0:42:15 | 0:28:04 | 0:14:11 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
dead | 7117683 | 2022-12-15 05:15:57 | 2022-12-15 05:48:52 | 2022-12-15 05:52:53 | 0:04:01 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi049 |
pass | 7117684 | 2022-12-15 05:15:59 | 2022-12-15 05:50:54 | 2022-12-15 06:33:22 | 0:42:28 | 0:33:29 | 0:08:59 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7117685 | 2022-12-15 05:16:15 | 2022-12-15 05:51:28 | 2022-12-15 06:41:25 | 0:49:57 | 0:37:59 | 0:11:58 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 7117686 | 2022-12-15 05:16:21 | 2022-12-15 05:52:12 | 2022-12-15 06:20:02 | 0:27:50 | 0:18:12 | 0:09:38 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7117687 | 2022-12-15 05:16:28 | 2022-12-15 05:52:37 | 2022-12-15 06:39:37 | 0:47:00 | 0:35:03 | 0:11:57 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
pass | 7117688 | 2022-12-15 05:16:34 | 2022-12-15 05:54:18 | 2022-12-15 06:19:04 | 0:24:46 | 0:16:30 | 0:08:16 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
fail | 7117689 | 2022-12-15 05:16:50 | 2022-12-15 05:54:25 | 2022-12-15 07:00:29 | 1:06:04 | 0:55:29 | 0:10:35 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi134 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0a95d576-7c3f-11ed-8443-001a4aab830c -e sha1=ea64f728a78d3a1b70a28c2c8fc745b87adee975 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\'' |
pass | 7117690 | 2022-12-15 05:16:52 | 2022-12-15 05:54:40 | 2022-12-15 06:34:52 | 0:40:12 | 0:29:57 | 0:10:15 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7117691 | 2022-12-15 05:16:54 | 2022-12-15 05:54:51 | 2022-12-15 06:33:33 | 0:38:42 | 0:31:20 | 0:07:22 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
pass | 7117692 | 2022-12-15 05:17:00 | 2022-12-15 05:54:51 | 2022-12-15 06:22:06 | 0:27:15 | 0:17:51 | 0:09:24 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
dead | 7117693 | 2022-12-15 05:17:02 | 2022-12-15 05:54:52 | 2022-12-15 05:57:19 | 0:02:27 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi092 |
pass | 7117694 | 2022-12-15 05:17:04 | 2022-12-15 05:54:52 | 2022-12-15 06:45:17 | 0:50:25 | 0:40:28 | 0:09:57 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
pass | 7117696 | 2022-12-15 05:17:11 | 2022-12-15 05:56:17 | 2022-12-15 06:54:13 | 0:57:56 | 0:44:51 | 0:13:05 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7117697 | 2022-12-15 05:17:17 | 2022-12-15 05:58:03 | 2022-12-15 06:30:42 | 0:32:39 | 0:19:36 | 0:13:03 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7117698 | 2022-12-15 05:17:23 | 2022-12-15 05:58:33 | 2022-12-15 06:25:28 | 0:26:55 | 0:18:42 | 0:08:13 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
pass | 7117699 | 2022-12-15 05:17:29 | 2022-12-15 05:58:49 | 2022-12-15 06:35:55 | 0:37:06 | 0:26:43 | 0:10:23 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7117700 | 2022-12-15 05:17:40 | 2022-12-15 05:59:19 | 2022-12-15 06:25:43 | 0:26:24 | 0:15:45 | 0:10:39 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7117701 | 2022-12-15 05:17:52 | 2022-12-15 05:59:47 | 2022-12-15 06:42:33 | 0:42:46 | 0:33:40 | 0:09:06 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7117702 | 2022-12-15 05:17:56 | 2022-12-15 05:59:47 | 2022-12-15 06:38:37 | 0:38:50 | 0:27:25 | 0:11:25 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7117703 | 2022-12-15 05:18:07 | 2022-12-15 05:59:53 | 2022-12-15 06:36:59 | 0:37:06 | 0:27:10 | 0:09:56 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7117704 | 2022-12-15 05:18:08 | 2022-12-15 05:59:59 | 2022-12-15 06:32:53 | 0:32:54 | 0:20:26 | 0:12:28 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 7117705 | 2022-12-15 05:18:09 | 2022-12-15 06:00:00 | 2022-12-15 06:37:12 | 0:37:12 | 0:24:43 | 0:12:29 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7117706 | 2022-12-15 05:18:16 | 2022-12-15 06:00:15 | 2022-12-15 07:14:41 | 1:14:26 | 1:00:10 | 0:14:16 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 7117707 | 2022-12-15 05:18:22 | 2022-12-15 06:00:38 | 2022-12-15 06:53:50 | 0:53:12 | 0:41:08 | 0:12:04 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7117708 | 2022-12-15 05:18:28 | 2022-12-15 06:01:13 | 2022-12-15 06:23:54 | 0:22:41 | 0:14:01 | 0:08:40 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
pass | 7117709 | 2022-12-15 05:18:45 | 2022-12-15 06:01:33 | 2022-12-15 06:30:41 | 0:29:08 | 0:18:01 | 0:11:07 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
dead | 7117710 | 2022-12-15 05:18:54 | 2022-12-15 06:01:39 | 2022-12-15 06:05:46 | 0:04:07 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi163 |
dead | 7117711 | 2022-12-15 05:19:01 | 2022-12-15 06:02:14 | 2022-12-15 06:55:09 | 0:52:55 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
dead | 7117712 | 2022-12-15 05:19:05 | 2022-12-15 06:02:50 | 2022-12-15 06:14:32 | 0:11:42 | 0:02:54 | 0:08:48 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi103 and smithi157: /dev/vg_nvme already exists in filesystem |
dead | 7117713 | 2022-12-15 05:19:11 | 2022-12-15 06:02:55 | 2022-12-15 06:12:21 | 0:09:26 | 0:02:17 | 0:07:09 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi130: /dev/vg_nvme already exists in filesystem |
dead | 7117714 | 2022-12-15 05:19:17 | 2022-12-15 06:03:20 | 2022-12-15 06:14:53 | 0:11:33 | 0:03:19 | 0:08:14 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi018 and smithi196: /dev/vg_nvme already exists in filesystem |
dead | 7117715 | 2022-12-15 05:19:24 | 2022-12-15 06:03:31 | 2022-12-15 06:15:11 | 0:11:40 | 0:03:02 | 0:08:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi102 and smithi111: /dev/vg_nvme already exists in filesystem |
dead | 7117716 | 2022-12-15 05:19:30 | 2022-12-15 06:03:49 | 2022-12-15 06:14:10 | 0:10:21 | 0:03:15 | 0:07:06 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi083 and smithi067: /dev/vg_nvme already exists in filesystem |
dead | 7117717 | 2022-12-15 05:19:40 | 2022-12-15 06:03:49 | 2022-12-15 06:15:36 | 0:11:47 | 0:04:04 | 0:07:43 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi137: /dev/vg_nvme already exists in filesystem |
dead | 7117718 | 2022-12-15 05:19:51 | 2022-12-15 06:03:50 | 2022-12-15 06:17:23 | 0:13:33 | 0:03:44 | 0:09:49 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Creating volume group 'vg_nvme' failed (rc 3) on smithi106 and smithi085: /dev/vg_nvme already exists in filesystem |
fail | 7117719 | 2022-12-15 05:20:08 | 2022-12-15 06:03:55 | 2022-12-15 06:31:28 | 0:27:33 | 0:16:25 | 0:11:08 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi105 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ea64f728a78d3a1b70a28c2c8fc745b87adee975 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c86748ae-7c40-11ed-8443-001a4aab830c -- ceph orch apply prometheus '1;smithi105=a'" |
dead | 7117720 | 2022-12-15 05:20:24 | 2022-12-15 06:06:16 | 2022-12-15 06:09:00 | 0:02:44 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |||
Failure Reason: Error reimaging machines: Failed to power on smithi165 |
pass | 7117721 | 2022-12-15 05:20:40 | 2022-12-15 06:06:46 | 2022-12-15 06:56:19 | 0:49:33 | 0:38:33 | 0:11:00 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
pass | 7117722 | 2022-12-15 05:20:56 | 2022-12-15 06:35:38 | 1073 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | ||||
pass | 7117723 | 2022-12-15 05:21:07 | 2022-12-15 06:08:16 | 2022-12-15 06:33:07 | 0:24:51 | 0:16:14 | 0:08:37 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
pass | 7117724 | 2022-12-15 05:21:14 | 2022-12-15 06:08:17 | 2022-12-15 06:53:12 | 0:44:55 | 0:34:37 | 0:10:18 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7117725 | 2022-12-15 05:21:25 | 2022-12-15 06:08:22 | 2022-12-15 06:44:40 | 0:36:18 | 0:26:30 | 0:09:48 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
pass | 7117726 | 2022-12-15 05:21:36 | 2022-12-15 06:08:28 | 2022-12-15 06:36:38 | 0:28:10 | 0:17:15 | 0:10:55 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7117727 | 2022-12-15 05:21:42 | 2022-12-15 06:09:51 | 2022-12-15 06:50:03 | 0:40:12 | 0:28:40 | 0:11:32 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
fail | 7117728 | 2022-12-15 05:21:53 | 2022-12-15 06:10:47 | 2022-12-15 06:37:41 | 0:26:54 | 0:17:59 | 0:08:55 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Test failure: test_cluster_set_user_config_with_non_existing_clusterid (tasks.cephfs.test_nfs.TestNFS) |
pass | 7117729 | 2022-12-15 05:22:09 | 2022-12-15 06:11:21 | 2022-12-15 06:41:43 | 0:30:22 | 0:17:28 | 0:12:54 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7117732 | 2022-12-15 05:22:15 | 2022-12-15 06:15:32 | 2022-12-15 06:41:29 | 0:25:57 | 0:16:26 | 0:09:31 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7117733 | 2022-12-15 05:22:21 | 2022-12-15 06:15:59 | 2022-12-15 07:04:15 | 0:48:16 | 0:38:34 | 0:09:42 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
pass | 7117734 | 2022-12-15 05:22:27 | 2022-12-15 06:15:59 | 2022-12-15 07:04:38 | 0:48:39 | 0:40:00 | 0:08:39 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
pass | 7117736 | 2022-12-15 05:22:44 | 2022-12-15 06:16:10 | 2022-12-15 06:53:15 | 0:37:05 | 0:22:36 | 0:14:29 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
pass | 7117738 | 2022-12-15 05:22:53 | 2022-12-15 06:16:45 | 2022-12-15 06:42:35 | 0:25:50 | 0:16:22 | 0:09:28 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli} | 1 | |
pass | 7117740 | 2022-12-15 05:22:54 | 2022-12-15 06:17:36 | 2022-12-15 07:00:28 | 0:42:52 | 0:33:13 | 0:09:39 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7117742 | 2022-12-15 05:22:55 | 2022-12-15 06:18:12 | 2022-12-15 06:44:39 | 0:26:27 | 0:12:20 | 0:14:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7117743 | 2022-12-15 05:22:56 | 2022-12-15 06:19:31 | 2022-12-15 06:44:39 | 0:25:08 | 0:16:12 | 0:08:56 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7117745 | 2022-12-15 05:23:02 | 2022-12-15 06:19:32 | 2022-12-15 07:27:03 | 1:07:31 | 0:55:48 | 0:11:43 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi170 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bc9a8c3c-7c42-11ed-8443-001a4aab830c -e sha1=ea64f728a78d3a1b70a28c2c8fc745b87adee975 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\'' |
pass | 7117747 | 2022-12-15 05:23:08 | 2022-12-15 06:20:37 | 2022-12-15 07:05:26 | 0:44:49 | 0:30:48 | 0:14:01 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
pass | 7117749 | 2022-12-15 05:23:23 | 2022-12-15 06:52:08 | 1027 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | ||||
pass | 7117751 | 2022-12-15 05:23:34 | 2022-12-15 06:25:13 | 2022-12-15 07:07:12 | 0:41:59 | 0:30:38 | 0:11:21 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7117753 | 2022-12-15 05:23:50 | 2022-12-15 06:25:58 | 2022-12-15 07:02:47 | 0:36:49 | 0:26:39 | 0:10:10 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7117756 | 2022-12-15 05:23:57 | 2022-12-15 06:27:24 | 2022-12-15 06:55:57 | 0:28:33 | 0:15:58 | 0:12:35 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
dead | 7117757 | 2022-12-15 05:24:03 | 2022-12-15 18:48:43 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |||||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host ([Errno 113] No route to host) |
dead | 7117760 | 2022-12-15 05:24:10 | 2022-12-15 18:48:39 | 2022-12-15 18:48:43 | 0:00:04 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host ([Errno 113] No route to host) |
dead | 7117762 | 2022-12-15 05:24:20 | 2022-12-15 18:48:43 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |||||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host ([Errno 113] No route to host) |
dead | 7117763 | 2022-12-15 05:24:37 | 2022-12-15 18:48:40 | 2022-12-15 18:48:43 | 0:00:03 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason: Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host ([Errno 113] No route to host) |
dead | 7117766 | 2022-12-15 05:24:42 | 2022-12-15 18:48:40 | 2022-12-15 18:48:43 | 0:00:03 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} | 1 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f86bd306cd0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117768 | 2022-12-15 05:24:58 | 2022-12-15 18:48:40 | 2022-12-15 18:48:43 | 0:00:03 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fface295910>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117770 | 2022-12-15 05:25:04 | 2022-12-15 18:48:41 | 2022-12-15 18:48:43 | 0:00:02 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f75403a1940>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117772 | 2022-12-15 05:25:08 | 2022-12-15 18:48:41 | 2022-12-15 18:48:43 | 0:00:02 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f968d309820>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117775 | 2022-12-15 05:25:13 | 2022-12-15 18:48:41 | 2022-12-15 18:48:43 | 0:00:02 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f68861857c0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117776 | 2022-12-15 05:25:29 | 2022-12-15 18:48:42 | 2022-12-15 18:48:43 | 0:00:01 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f803f466af0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117778 | 2022-12-15 05:25:35 | 2022-12-15 18:48:42 | 2022-12-15 18:48:46 | 0:00:04 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fad88f7a610>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117780 | 2022-12-15 05:25:46 | 2022-12-15 18:48:42 | 2022-12-15 18:48:46 | 0:00:04 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm_repos} | 1 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe30235eee0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117782 | 2022-12-15 05:26:02 | 2022-12-15 18:48:43 | 2022-12-15 18:48:46 | 0:00:03 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe46f47fac0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117784 | 2022-12-15 05:26:14 | 2022-12-15 18:48:43 | 2022-12-15 18:48:46 | 0:00:03 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0c0a1fba00>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117786 | 2022-12-15 05:26:20 | 2022-12-15 18:48:44 | 2022-12-15 18:48:46 | 0:00:02 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbe5b6e58b0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117788 | 2022-12-15 05:26:26 | 2022-12-15 18:48:44 | 2022-12-15 18:48:46 | 0:00:02 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fab6b5f6b50>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117790 | 2022-12-15 05:26:28 | 2022-12-15 18:48:44 | 2022-12-15 18:48:46 | 0:00:02 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f96edebb850>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117792 | 2022-12-15 05:26:39 | 2022-12-15 18:48:45 | 2022-12-15 18:48:46 | 0:00:01 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f610f4eca90>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117794 | 2022-12-15 05:26:45 | 2022-12-15 18:48:45 | 2022-12-15 18:48:49 | 0:00:04 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fde5abeb9a0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117796 | 2022-12-15 05:26:52 | 2022-12-15 18:48:45 | 2022-12-15 18:48:49 | 0:00:04 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe7a2feb790>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117798 | 2022-12-15 05:27:04 | 2022-12-15 18:48:46 | 2022-12-15 18:48:49 | 0:00:03 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9e4b4ea160>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117800 | 2022-12-15 05:27:15 | 2022-12-15 18:48:46 | 2022-12-15 18:48:49 | 0:00:03 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0a587d6b20>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117803 | 2022-12-15 05:27:29 | 2022-12-15 18:48:47 | 2022-12-15 18:48:49 | 0:00:02 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb3ddafc910>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117805 | 2022-12-15 05:27:39 | 2022-12-15 18:48:49 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_nfs} | 1 | |||||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f90e3feddc0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117808 | 2022-12-15 05:27:41 | 2022-12-15 18:48:47 | 2022-12-15 18:48:49 | 0:00:02 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd990665a90>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117810 | 2022-12-15 05:27:52 | 2022-12-15 18:48:48 | 2022-12-15 18:48:50 | 0:00:02 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f90431caca0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117812 | 2022-12-15 05:27:57 | 2022-12-15 18:48:49 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |||||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f480a1cea30>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117813 | 2022-12-15 05:28:03 | 2022-12-15 18:48:48 | 2022-12-15 18:48:52 | 0:00:04 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe0d20282b0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117815 | 2022-12-15 05:28:14 | 2022-12-15 18:48:52 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f629932d970>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117817 | 2022-12-15 05:28:25 | 2022-12-15 18:48:52 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |||||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f22adc2f7c0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117819 | 2022-12-15 05:28:36 | 2022-12-15 18:48:49 | 2022-12-15 18:48:52 | 0:00:03 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f458b2dbaf0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117821 | 2022-12-15 05:28:41 | 2022-12-15 18:48:50 | 2022-12-15 18:49:04 | 0:00:14 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f121c8ed8b0>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117823 | 2022-12-15 05:28:57 | 2022-12-15 18:49:00 | 2022-12-15 18:49:04 | 0:00:04 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff8554d7a60>: Failed to establish a new connection: [Errno 113] No route to host')) |
dead | 7117825 | 2022-12-15 05:28:59 | 2022-12-15 18:49:00 | 2022-12-15 18:49:04 | 0:00:04 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |||
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8cd2b36e20>: Failed to establish a new connection: [Errno 113] No route to host')) |