User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2023-04-25 13:08:27 | 2023-04-27 13:37:03 | 2023-04-28 06:50:01 | 17:12:58 | orch:cephadm | wip-guits-testing-2023-04-25-0823 | smithi | 0fac42e | 13 | 83 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7251009 | 2023-04-25 13:08:39 | 2023-04-27 13:37:03 | 2023-04-27 14:03:34 | 0:26:31 | | | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 |
Failure Reason: Command failed on smithi089 with status 1: 'sudo yum install -y kernel'
fail | 7251010 | 2023-04-25 13:08:40 | 2023-04-27 13:38:34 | 2023-04-27 14:11:18 | 0:32:44 | 0:24:22 | 0:08:22 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi033 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c6db1dda-e504-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251011 | 2023-04-25 13:08:41 | 2023-04-27 13:40:04 | 2023-04-27 14:57:45 | 1:17:41 | 0:47:27 | 0:30:14 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
dead | 7251012 | 2023-04-25 13:08:42 | 2023-04-27 13:40:45 | 2023-04-28 01:50:10 | 12:09:25 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: hit max job timeout
dead | 7251013 | 2023-04-25 13:08:43 | 2023-04-27 13:41:25 | 2023-04-28 01:51:56 | 12:10:31 | | | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
pass | 7251014 | 2023-04-25 13:08:44 | 2023-04-27 13:42:46 | 2023-04-27 14:11:04 | 0:28:18 | 0:20:19 | 0:07:59 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/off orchestrator_cli} | 2 | |
dead | 7251015 | 2023-04-25 13:08:45 | 2023-04-27 13:43:16 | 2023-04-27 14:49:26 | 1:06:10 | 0:34:11 | 0:31:59 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
Failure Reason: {'smithi148.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/144bb6d03f4cceafda5d5248f92ece7a8539e8858d490dfec5ebeb61d487bb20-filelists.xml.gz - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}
fail | 7251016 | 2023-04-25 13:08:46 | 2023-04-27 13:43:37 | 2023-04-27 14:03:34 | 0:19:57 | 0:11:32 | 0:08:25 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi150 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d6363aea-e503-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251017 | 2023-04-25 13:08:47 | 2023-04-27 13:46:07 | 2023-04-27 14:57:41 | 1:11:34 | 0:39:57 | 0:31:37 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
Failure Reason: Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5538905c-e50b-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
fail | 7251018 | 2023-04-25 13:08:48 | 2023-04-27 13:48:08 | 2023-04-27 15:45:28 | 1:57:20 | 1:24:09 | 0:33:11 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e451a47c-e50b-11ed-9b00-001a4aab830c -e sha1=0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\''
fail | 7251019 | 2023-04-25 13:08:49 | 2023-04-27 13:52:09 | 2023-04-27 15:48:03 | 1:55:54 | 1:25:00 | 0:30:54 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi141 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 606e56c6-e512-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251020 | 2023-04-25 13:08:50 | 2023-04-27 13:52:59 | 2023-04-27 14:21:03 | 0:28:04 | 0:20:12 | 0:07:52 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1794c04a-e506-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251021 | 2023-04-25 13:08:51 | 2023-04-27 13:53:40 | 2023-04-27 14:23:36 | 0:29:56 | 0:21:03 | 0:08:53 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251022 | 2023-04-25 13:08:52 | 2023-04-27 13:56:01 | 2023-04-27 15:57:10 | 2:01:09 | 1:29:31 | 0:31:38 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi023 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cc2650fc-e513-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251023 | 2023-04-25 13:08:53 | 2023-04-27 13:57:01 | 2023-04-27 14:20:16 | 0:23:15 | 0:14:19 | 0:08:56 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0f531d64-e506-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
fail | 7251024 | 2023-04-25 13:08:54 | 2023-04-27 13:59:02 | 2023-04-27 14:24:38 | 0:25:36 | 0:15:32 | 0:10:04 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ca67bce0-e506-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251025 | 2023-04-25 13:08:55 | 2023-04-27 14:03:43 | 2023-04-27 14:30:46 | 0:27:03 | 0:13:45 | 0:13:18 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi159 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7daefe8a-e507-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251026 | 2023-04-25 13:08:56 | 2023-04-27 14:04:33 | 2023-04-27 14:43:17 | 0:38:44 | 0:26:12 | 0:12:32 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0abac600-e509-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251027 | 2023-04-25 13:08:57 | 2023-04-27 14:09:35 | 2023-04-27 14:36:53 | 0:27:18 | 0:19:07 | 0:08:11 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
dead | 7251028 | 2023-04-25 13:08:58 | 2023-04-27 14:11:05 | 2023-04-27 15:14:23 | 1:03:18 | 0:34:06 | 0:29:12 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: {'smithi033.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/38c2c78c3f89e4d347f07263805d33d8ae17df60ebab2e8a259218c57bcae2fb-comps-PowerTools.x86_64.xml - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}
fail | 7251029 | 2023-04-25 13:08:59 | 2023-04-27 14:11:26 | 2023-04-27 15:28:59 | 1:17:33 | 0:47:57 | 0:29:36 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
pass | 7251030 | 2023-04-25 13:09:00 | 2023-04-27 14:11:46 | 2023-04-27 16:27:04 | 2:15:18 | 1:45:12 | 0:30:06 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7251031 | 2023-04-25 13:09:01 | 2023-04-27 14:13:37 | 2023-04-27 14:46:32 | 0:32:55 | 0:24:09 | 0:08:46 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason: Command failed on smithi097 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a427d404-e509-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251032 | 2023-04-25 13:09:02 | 2023-04-27 14:15:17 | 2023-04-27 16:19:20 | 2:04:03 | 1:32:27 | 0:31:36 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e30f1e5e-e516-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251033 | 2023-04-25 13:09:03 | 2023-04-27 14:18:08 | 2023-04-27 14:53:18 | 0:35:10 | 0:20:40 | 0:14:30 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251034 | 2023-04-25 13:09:04 | 2023-04-27 14:19:19 | 2023-04-27 14:44:15 | 0:24:56 | 0:12:24 | 0:12:32 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f2f9dbe-e509-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
fail | 7251035 | 2023-04-25 13:09:05 | 2023-04-27 14:20:19 | 2023-04-27 14:47:43 | 0:27:24 | 0:14:31 | 0:12:53 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bc696b04-e509-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251036 | 2023-04-25 13:09:06 | 2023-04-27 14:21:10 | 2023-04-27 14:47:55 | 0:26:45 | 0:20:37 | 0:06:08 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e9907df2-e509-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7251037 | 2023-04-25 13:09:07 | 2023-04-27 14:21:10 | 2023-04-27 14:42:58 | 0:21:48 | 0:10:42 | 0:11:06 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7251038 | 2023-04-25 13:09:08 | 2023-04-27 14:21:10 | 2023-04-27 15:39:12 | 1:18:02 | 0:47:36 | 0:30:26 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251039 | 2023-04-25 13:09:09 | 2023-04-27 14:21:51 | 2023-04-27 14:47:20 | 0:25:29 | 0:16:59 | 0:08:30 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi155 with status 127: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fea46e6a-e509-11ed-9b00-001a4aab830c -- ceph mon dump -f json'
dead | 7251040 | 2023-04-25 13:09:10 | 2023-04-27 14:23:41 | 2023-04-28 02:32:54 | 12:09:13 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: hit max job timeout
fail | 7251041 | 2023-04-25 13:09:11 | 2023-04-27 14:24:12 | 2023-04-27 15:37:26 | 1:13:14 | 0:39:25 | 0:33:49 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason: Command failed on smithi089 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d08656ea-e510-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
pass | 7251042 | 2023-04-25 13:09:12 | 2023-04-27 14:27:53 | 2023-04-27 15:14:22 | 0:46:29 | 0:36:14 | 0:10:15 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
fail | 7251043 | 2023-04-25 13:09:13 | 2023-04-27 14:27:53 | 2023-04-27 16:38:07 | 2:10:14 | 1:36:56 | 0:33:18 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi188.front.sepia.ceph.com: ['type=AVC msg=audit(1682613278.526:19263): avc: denied { ioctl } for pid=152446 comm="iptables" path="/var/lib/containers/storage/overlay/256a4cc4e3ccd227ae4bb185509f6e16815ef9e002a6e08621102747e387d673/merged" dev="overlay" ino=3411199 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 7251044 | 2023-04-25 13:09:14 | 2023-04-27 14:30:54 | 2023-04-27 14:58:09 | 0:27:15 | 0:20:37 | 0:06:38 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251045 | 2023-04-25 13:09:15 | 2023-04-27 14:30:54 | 2023-04-27 15:00:29 | 0:29:35 | 0:13:33 | 0:16:02 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9338a748-e50b-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251046 | 2023-04-25 13:09:16 | 2023-04-27 14:36:25 | 2023-04-27 15:53:47 | 1:17:22 | 0:47:25 | 0:29:57 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
pass | 7251047 | 2023-04-25 13:09:17 | 2023-04-27 14:36:26 | 2023-04-27 15:44:43 | 1:08:17 | 0:38:08 | 0:30:09 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
dead | 7251048 | 2023-04-25 13:09:18 | 2023-04-27 14:36:56 | 2023-04-28 02:47:48 | 12:10:52 | | | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason: hit max job timeout
fail | 7251049 | 2023-04-25 13:09:19 | 2023-04-27 14:37:47 | 2023-04-27 15:47:22 | 1:09:35 | 0:39:17 | 0:30:18 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 36199dc2-e512-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
fail | 7251050 | 2023-04-25 13:09:20 | 2023-04-27 14:37:47 | 2023-04-27 16:38:39 | 2:00:52 | 1:29:43 | 0:31:09 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi088 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 916d41a4-e519-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251051 | 2023-04-25 13:09:21 | 2023-04-27 14:39:08 | 2023-04-27 16:40:41 | 2:01:33 | 1:30:04 | 0:31:29 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d14ff03c-e519-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251052 | 2023-04-25 13:09:22 | 2023-04-27 17:04:44 | 2023-04-27 19:05:57 | 2:01:13 | 1:31:10 | 0:30:03 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
Failure Reason: Command failed on smithi187 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 06647dba-e52e-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251053 | 2023-04-25 13:09:23 | 2023-04-27 17:04:44 | 2023-04-27 17:40:21 | 0:35:37 | 0:27:39 | 0:07:58 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
pass | 7251054 | 2023-04-25 13:09:24 | 2023-04-27 17:05:15 | 2023-04-27 19:23:04 | 2:17:49 | 1:48:17 | 0:29:32 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7251055 | 2023-04-25 13:09:25 | 2023-04-27 17:05:45 | 2023-04-27 19:12:19 | 2:06:34 | 1:32:14 | 0:34:20 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fb0b9858-e52e-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251056 | 2023-04-25 13:09:26 | 2023-04-27 17:10:56 | 2023-04-27 17:34:31 | 0:23:35 | 0:14:25 | 0:09:10 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Command failed on smithi163 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4bc9a8c4-e521-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251057 | 2023-04-25 13:09:27 | 2023-04-27 17:12:37 | 2023-04-27 17:40:03 | 0:27:26 | 0:19:51 | 0:07:35 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251058 | 2023-04-25 13:09:28 | 2023-04-27 17:12:37 | 2023-04-27 17:43:36 | 0:30:59 | 0:19:48 | 0:11:11 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: Command failed on smithi105 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5dfa9e08-e522-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251059 | 2023-04-25 13:09:29 | 2023-04-27 17:16:28 | 2023-04-27 17:44:58 | 0:28:30 | 0:19:24 | 0:09:06 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
fail | 7251060 | 2023-04-25 13:09:30 | 2023-04-27 17:19:29 | 2023-04-27 17:42:52 | 0:23:23 | 0:11:29 | 0:11:54 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4a6d520e-e522-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251061 | 2023-04-25 13:09:31 | 2023-04-27 17:21:10 | 2023-04-27 18:35:45 | 1:14:35 | 0:41:36 | 0:32:59 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d45a6e1e-e529-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1'
fail | 7251062 | 2023-04-25 13:09:32 | 2023-04-27 17:24:00 | 2023-04-27 19:29:11 | 2:05:11 | 1:30:24 | 0:34:47 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 55d13c46-e531-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251063 | 2023-04-25 13:09:33 | 2023-04-27 17:30:12 | 2023-04-27 17:56:30 | 0:26:18 | 0:17:02 | 0:09:16 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi088 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 60ab2814-e524-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7251064 | 2023-04-25 13:09:34 | 2023-04-27 17:32:42 | 2023-04-27 18:07:06 | 0:34:24 | 0:20:34 | 0:13:50 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: timeout expired in wait_for_all_osds_up
pass | 7251065 | 2023-04-25 13:09:34 | 2023-04-27 17:34:33 | 2023-04-27 19:57:35 | 2:23:02 | 1:48:23 | 0:34:39 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7251066 | 2023-04-25 13:09:35 | 2023-04-27 17:40:14 | 2023-04-27 19:47:33 | 2:07:19 | 1:37:34 | 0:29:45 | smithi | main | centos | 8.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 7251067 | 2023-04-25 13:09:36 | 2023-04-27 17:40:25 | 2023-04-27 18:47:20 | 1:06:55 | 0:38:54 | 0:28:01 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b66fb94-e52b-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251068 | 2023-04-25 13:09:37 | 2023-04-27 17:40:25 | 2023-04-27 18:52:55 | 1:12:30 | 0:39:35 | 0:32:55 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2b2df7c2-e52c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1' |
fail | 7251069 | 2023-04-25 13:09:38 | 2023-04-27 17:43:46 | 2023-04-27 19:37:25 | 1:53:39 | 1:24:13 | 0:29:26 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 50b87cd8-e52c-11ed-9b00-001a4aab830c -e sha1=0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\'' |
fail | 7251070 | 2023-04-25 13:09:39 | 2023-04-27 17:44:16 | 2023-04-27 20:03:18 | 2:19:02 | 1:42:59 | 0:36:03 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cf261932-e535-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251071 | 2023-04-25 13:09:40 | 2023-04-27 17:48:57 | 2023-04-27 18:13:01 | 0:24:04 | 0:11:58 | 0:12:06 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi162 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4a0a3fbc-e526-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251072 | 2023-04-25 13:09:41 | 2023-04-27 17:48:58 | 2023-04-27 19:06:52 | 1:17:54 | 0:47:59 | 0:29:55 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251073 | 2023-04-25 13:09:42 | 2023-04-27 17:49:48 | 2023-04-27 19:57:06 | 2:07:18 | 1:35:10 | 0:32:08 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi115 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0ae094d0-e535-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251074 | 2023-04-25 13:09:43 | 2023-04-27 17:51:29 | 2023-04-27 19:56:12 | 2:04:43 | 1:28:34 | 0:36:09 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi088 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba4d1e8-e534-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251075 | 2023-04-25 13:09:44 | 2023-04-27 17:56:40 | 2023-04-27 18:26:45 | 0:30:05 | 0:19:42 | 0:10:23 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251076 | 2023-04-25 13:09:45 | 2023-04-27 17:59:31 | 2023-04-27 18:19:59 | 0:20:28 | 0:11:50 | 0:08:38 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 718ff4e0-e527-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1' |
fail | 7251077 | 2023-04-25 13:09:46 | 2023-04-27 18:00:21 | 2023-04-27 19:28:56 | 1:28:35 | 0:51:49 | 0:36:46 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251078 | 2023-04-25 13:09:47 | 2023-04-27 18:07:12 | 2023-04-27 18:41:12 | 0:34:00 | 0:24:14 | 0:09:46 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi090 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 75d18dea-e52a-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251079 | 2023-04-25 13:09:48 | 2023-04-27 18:09:33 | 2023-04-27 20:17:56 | 2:08:23 | 1:36:21 | 0:32:02 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi033 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 32be732a-e538-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 7251080 | 2023-04-25 13:09:49 | 2023-04-27 18:12:34 | 2023-04-27 18:33:32 | 0:20:58 | 0:15:01 | 0:05:57 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_adoption} | 1 | |
pass | 7251081 | 2023-04-25 13:09:50 | 2023-04-27 18:12:34 | 2023-04-27 20:34:26 | 2:21:52 | 1:52:50 | 0:29:02 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7251082 | 2023-04-25 13:09:51 | 2023-04-27 18:13:05 | 2023-04-27 18:42:49 | 0:29:44 | 0:22:12 | 0:07:32 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi114 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b21251ea-e52a-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251083 | 2023-04-25 13:09:52 | 2023-04-27 18:13:45 | 2023-04-27 18:40:50 | 0:27:05 | 0:20:00 | 0:07:05 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
SELinux denials found on ubuntu@smithi110.front.sepia.ceph.com: ['type=AVC msg=audit(1682620650.192:20090): avc: denied { ioctl } for pid=109887 comm="iptables" path="/var/lib/containers/storage/overlay/36e176a9b764059bb21bc52c53ddebc4c87b2bb9c14f5aac019caa7afa8b1cb3/merged" dev="overlay" ino=3279057 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1682620650.348:20094): avc: denied { ioctl } for pid=109914 comm="iptables" path="/var/lib/containers/storage/overlay/36e176a9b764059bb21bc52c53ddebc4c87b2bb9c14f5aac019caa7afa8b1cb3/merged" dev="overlay" ino=3279057 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1'] |
fail | 7251084 | 2023-04-25 13:09:53 | 2023-04-27 18:13:45 | 2023-04-27 18:45:34 | 0:31:49 | 0:20:48 | 0:11:01 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
dead | 7251085 | 2023-04-25 13:09:54 | 2023-04-27 18:17:46 | 2023-04-27 19:23:55 | 1:06:09 | 0:34:08 | 0:32:01 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
{'smithi183.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/144bb6d03f4cceafda5d5248f92ece7a8539e8858d490dfec5ebeb61d487bb20-filelists.xml.gz - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}} |
fail | 7251086 | 2023-04-25 13:09:55 | 2023-04-27 18:20:07 | 2023-04-27 18:53:54 | 0:33:47 | 0:21:22 | 0:12:25 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi149 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 70ae53a0-e52c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 7251087 | 2023-04-25 13:09:56 | 2023-04-27 18:26:48 | 2023-04-27 18:48:01 | 0:21:13 | 0:07:04 | 0:14:09 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7251088 | 2023-04-25 13:09:57 | 2023-04-27 18:32:17 | 2023-04-27 18:59:51 | 0:27:34 | 0:19:41 | 0:07:53 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251089 | 2023-04-25 13:09:58 | 2023-04-27 18:32:28 | 2023-04-27 18:50:36 | 0:18:08 | 0:11:09 | 0:06:59 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ef8578c6-e52b-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1' |
fail | 7251090 | 2023-04-25 13:09:59 | 2023-04-27 18:33:38 | 2023-04-27 18:54:46 | 0:21:08 | 0:14:51 | 0:06:17 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 81654172-e52c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251091 | 2023-04-25 13:10:00 | 2023-04-27 18:33:39 | 2023-04-27 18:58:20 | 0:24:41 | 0:11:40 | 0:13:01 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c78ec5c4-e52c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
dead | 7251092 | 2023-04-25 13:10:01 | 2023-04-27 18:35:49 | 2023-04-28 06:50:01 | 12:14:12 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason:
hit max job timeout |
fail | 7251093 | 2023-04-25 13:10:01 | 2023-04-27 18:42:51 | 2023-04-27 19:55:28 | 1:12:37 | 0:39:36 | 0:33:01 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason:
Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e81b4ce2-e534-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1' |
pass | 7251094 | 2023-04-25 13:10:02 | 2023-04-27 18:45:42 | 2023-04-27 19:29:39 | 0:43:57 | 0:34:13 | 0:09:44 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
fail | 7251095 | 2023-04-25 13:10:03 | 2023-04-27 18:45:42 | 2023-04-27 20:47:29 | 2:01:47 | 1:30:07 | 0:31:40 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
Failure Reason:
Command failed on smithi090 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 264f8292-e53c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251096 | 2023-04-25 13:10:04 | 2023-04-27 18:45:42 | 2023-04-27 19:17:52 | 0:32:10 | 0:20:02 | 0:12:08 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251097 | 2023-04-25 13:10:05 | 2023-04-27 18:45:53 | 2023-04-27 20:47:04 | 2:01:11 | 1:30:17 | 0:30:54 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi086 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37ddd86a-e53c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251098 | 2023-04-25 13:10:06 | 2023-04-27 18:48:03 | 2023-04-27 19:21:49 | 0:33:46 | 0:20:20 | 0:13:26 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251099 | 2023-04-25 13:10:07 | 2023-04-27 18:48:54 | 2023-04-27 20:47:48 | 1:58:54 | 1:27:39 | 0:31:15 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Command failed on smithi196 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3a07a08a-e53c-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251100 | 2023-04-25 13:10:08 | 2023-04-27 18:50:44 | 2023-04-27 20:53:57 | 2:03:13 | 1:32:24 | 0:30:49 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3e532bcc-e53d-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251101 | 2023-04-25 13:10:09 | 2023-04-27 18:52:35 | 2023-04-27 20:12:13 | 1:19:38 | 0:48:31 | 0:31:07 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251102 | 2023-04-25 13:10:10 | 2023-04-27 18:53:05 | 2023-04-27 19:22:38 | 0:29:33 | 0:21:36 | 0:07:57 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi136 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3d8f0d4e-e530-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251103 | 2023-04-25 13:10:11 | 2023-04-27 18:53:06 | 2023-04-27 19:14:02 | 0:20:56 | 0:09:44 | 0:11:12 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi149 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f76498d0-e52e-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/nvme4n1' |
fail | 7251104 | 2023-04-25 13:10:12 | 2023-04-27 18:53:56 | 2023-04-27 19:16:50 | 0:22:54 | 0:11:42 | 0:11:12 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi172 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5caa8ed4-e52f-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251105 | 2023-04-25 13:10:13 | 2023-04-27 18:54:57 | 2023-04-27 19:22:06 | 0:27:09 | 0:16:15 | 0:10:54 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi044 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2782641a-e530-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 7251106 | 2023-04-25 13:10:14 | 2023-04-27 18:58:28 | 2023-04-27 21:19:04 | 2:20:36 | 1:49:20 | 0:31:16 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7251107 | 2023-04-25 13:10:15 | 2023-04-27 18:59:58 | 2023-04-27 19:33:34 | 0:33:36 | 0:23:15 | 0:10:21 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
Command failed on smithi087 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c50fd446-e531-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251108 | 2023-04-25 13:10:16 | 2023-04-27 19:04:19 | 2023-04-27 20:24:08 | 1:19:49 | 0:47:16 | 0:32:33 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251109 | 2023-04-25 13:10:17 | 2023-04-27 19:06:40 | 2023-04-27 19:29:41 | 0:23:01 | 0:16:44 | 0:06:17 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4fb8f880-e531-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251110 | 2023-04-25 13:10:18 | 2023-04-27 19:07:00 | 2023-04-27 20:25:03 | 1:18:03 | 0:47:35 | 0:30:28 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
fail | 7251111 | 2023-04-25 13:10:19 | 2023-04-27 19:07:51 | 2023-04-27 19:36:01 | 0:28:10 | 0:12:33 | 0:15:37 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d0e00f66-e531-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7251112 | 2023-04-25 13:10:20 | 2023-04-27 19:10:32 | 2023-04-27 19:35:31 | 0:24:59 | 0:11:48 | 0:13:11 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi120 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0fac42ee00164c4a9a3aab987f72c8d170d5bcb7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d1c3ea7e-e531-11ed-9b00-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |