User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2022-12-14 19:53:30 | 2022-12-15 03:48:41 | 2022-12-15 05:07:16 | 1:18:35 | orch:cephadm | wip-adk-testing-2022-12-14-1132 | smithi | bd53b5f | 24 | 59 | 14 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7116678 | 2022-12-14 19:54:45 | 2022-12-14 20:03:05 | 2022-12-14 20:28:51 | 0:25:46 | 0:15:24 | 0:10:22 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/7ff1c79c-7bec-11ed-8443-001a4aab830c/ceph-mon.smithi090.log:2022-12-14T20:24:24.519+0000 7fa617502700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 7116679 | 2022-12-14 19:54:56 | 2022-12-14 20:03:50 | 2022-12-14 20:05:25 | 0:01:35 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi140 |
pass | 7116680 | 2022-12-14 19:54:57 | 2022-12-14 20:03:51 | 2022-12-14 20:43:07 | 0:39:16 | 0:29:06 | 0:10:10 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
dead | 7116681 | 2022-12-14 19:54:58 | 2022-12-14 20:05:00 | 2022-12-14 20:11:19 | 0:06:19 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: 'NoneType' object has no attribute '_fields' |
fail | 7116682 | 2022-12-14 19:55:04 | 2022-12-14 20:05:16 | 2022-12-14 20:48:43 | 0:43:27 | 0:33:45 | 0:09:42 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"/var/log/ceph/3fbd8e62-7bed-11ed-8443-001a4aab830c/ceph-mon.smithi018.log:2022-12-14T20:32:33.843+0000 7f7744963700 10 mon.smithi018@0(leader).log v331 logging 2022-12-14T20:32:33.176441+0000 mgr.smithi060.jzvuyw (mgr.24421) 64 : cephadm [WRN] unable to calc client keyring client.admin placement PlacementSpec(label='_admin'): Cannot place <ServiceSpec for service_name=mon>: No matching hosts for label _admin" in cluster log |
fail | 7116683 | 2022-12-14 19:55:21 | 2022-12-14 20:05:26 | 2022-12-14 20:56:02 | 0:50:36 | 0:39:21 | 0:11:15 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
"/var/log/ceph/22b2870a-7bed-11ed-8443-001a4aab830c/ceph-mon.smithi136.log:2022-12-14T20:31:52.947+0000 7f0c491f1700 10 mon.smithi136@0(leader).log v345 logging 2022-12-14T20:31:52.443901+0000 mgr.smithi161.zojiwa (mgr.24493) 55 : cephadm [WRN] unable to calc client keyring client.admin placement PlacementSpec(label='_admin'): Cannot place <ServiceSpec for service_name=mon>: No matching hosts for label _admin" in cluster log |
pass | 7116684 | 2022-12-14 19:55:25 | 2022-12-14 20:05:46 | 2022-12-14 20:34:05 | 0:28:19 | 0:16:22 | 0:11:57 | smithi | main | centos | 8.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7116685 | 2022-12-14 19:55:26 | 2022-12-14 20:08:07 | 2022-12-14 20:56:27 | 0:48:20 | 0:37:52 | 0:10:28 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
Failure Reason:
"/var/log/ceph/5371e2f0-7bed-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:27:18.514+0000 7f354145f700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
pass | 7116686 | 2022-12-14 19:55:32 | 2022-12-14 20:08:15 | 2022-12-14 20:31:37 | 0:23:22 | 0:13:38 | 0:09:44 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} | 1 | |
fail | 7116687 | 2022-12-14 19:55:38 | 2022-12-14 20:08:20 | 2022-12-14 21:14:38 | 1:06:18 | 0:55:38 | 0:10:40 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi079 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2f353036-7bed-11ed-8443-001a4aab830c -e sha1=bd53b5fa4e346bda7a32ffb27cc867da30f64894 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\'' |
fail | 7116688 | 2022-12-14 19:55:55 | 2022-12-14 20:08:26 | 2022-12-14 20:46:43 | 0:38:17 | 0:29:21 | 0:08:56 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
"/var/log/ceph/113c1e5e-7bee-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T20:32:28.072+0000 7f0b6d1eb700 7 mon.c@2(synchronizing).log v60 update_from_paxos applying incremental log 59 2022-12-14T20:32:26.078101+0000 mon.a (mon.0) 174 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
fail | 7116689 | 2022-12-14 19:56:06 | 2022-12-14 20:08:31 | 2022-12-14 20:48:28 | 0:39:57 | 0:31:14 | 0:08:43 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
"/var/log/ceph/d165eb5c-7bed-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:33:27.914+0000 7fa940af6700 0 log_channel(cluster) log [WRN] : Replacing daemon mds.a.smithi112.ozogfz as rank 0 with standby daemon mds.user_test_fs.smithi112.jvrirl" in cluster log |
pass | 7116690 | 2022-12-14 19:56:17 | 2022-12-14 20:08:56 | 2022-12-14 20:36:06 | 0:27:10 | 0:18:05 | 0:09:05 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
fail | 7116691 | 2022-12-14 19:56:28 | 2022-12-14 20:09:31 | 2022-12-14 20:40:11 | 0:30:40 | 0:18:57 | 0:11:43 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/d98abfb0-7bed-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T20:31:05.082+0000 7f261a779700 7 mon.c@2(synchronizing).log v60 update_from_paxos applying incremental log 59 2022-12-14T20:31:03.098510+0000 mon.a (mon.0) 173 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
dead | 7116692 | 2022-12-14 19:56:34 | 2022-12-14 20:10:33 | 2022-12-14 20:13:36 | 0:03:03 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi105 |
dead | 7116693 | 2022-12-14 19:56:49 | 2022-12-14 20:11:10 | 2022-12-14 21:10:45 | 0:59:35 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
dead | 7116694 | 2022-12-14 19:56:54 | 2022-12-14 20:11:52 | 2022-12-14 20:45:13 | 0:33:21 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
dead | 7116695 | 2022-12-14 19:57:00 | 2022-12-14 20:11:52 | 2022-12-14 20:42:10 | 0:30:18 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
dead | 7116696 | 2022-12-14 19:57:05 | 2022-12-14 20:11:53 | 2022-12-14 20:22:52 | 0:10:59 | 0:02:24 | 0:08:35 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
{'smithi035.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi146.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
fail | 7116697 | 2022-12-14 19:57:06 | 2022-12-14 20:12:40 | 2022-12-14 20:41:02 | 0:28:22 | 0:14:55 | 0:13:27 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/404e19f4-7bee-11ed-8443-001a4aab830c/ceph-mon.smithi119.log:2022-12-14T20:36:44.104+0000 7f6d84080700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7116698 | 2022-12-14 19:57:07 | 2022-12-14 20:14:05 | 2022-12-14 20:58:08 | 0:44:03 | 0:33:14 | 0:10:49 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"/var/log/ceph/78bd4d46-7bee-11ed-8443-001a4aab830c/ceph-mon.smithi089.log:2022-12-14T20:39:01.467+0000 7fd746c2e700 0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log |
fail | 7116699 | 2022-12-14 19:57:14 | 2022-12-14 20:15:00 | 2022-12-14 20:59:33 | 0:44:33 | 0:27:48 | 0:16:45 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/77b6d4b6-7bef-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:43:36.542+0000 7fbea91f4700 0 log_channel(cluster) log [WRN] : Health check failed: 2/5 mons down, quorum a,e,c (MON_DOWN)" in cluster log |
dead | 7116700 | 2022-12-14 19:57:19 | 2022-12-14 20:17:36 | 2022-12-14 20:20:34 | 0:02:58 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi047 |
fail | 7116701 | 2022-12-14 19:57:25 | 2022-12-14 20:17:57 | 2022-12-14 20:52:43 | 0:34:46 | 0:20:26 | 0:14:20 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
"/var/log/ceph/d12b4eb0-7bee-11ed-8443-001a4aab830c/ceph-mon.smithi145.log:2022-12-14T20:48:22.063+0000 7f887c3a6700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7116702 | 2022-12-14 19:57:31 | 2022-12-14 20:19:05 | 2022-12-14 21:01:00 | 0:41:55 | 0:25:15 | 0:16:40 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/183e0dce-7bef-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:42:08.149+0000 7fa963f93700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7116703 | 2022-12-14 19:57:48 | 2022-12-14 20:22:13 | 2022-12-14 21:39:36 | 1:17:23 | 1:03:49 | 0:13:34 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
"/var/log/ceph/84d6f78e-7bef-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:43:49.947+0000 7f3867bd1700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
fail | 7116704 | 2022-12-14 19:57:54 | 2022-12-14 20:22:18 | 2022-12-14 21:16:09 | 0:53:51 | 0:42:32 | 0:11:19 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
"/var/log/ceph/379e8d3c-7bf0-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T20:48:27.903+0000 7f0c64f2f700 7 mon.c@2(synchronizing).log v60 update_from_paxos applying incremental log 59 2022-12-14T20:48:25.921254+0000 mon.a (mon.0) 174 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7116705 | 2022-12-14 19:58:10 | 2022-12-14 20:22:52 | 2022-12-14 20:43:31 | 0:20:39 | 0:13:12 | 0:07:27 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
fail | 7116706 | 2022-12-14 19:58:11 | 2022-12-14 20:23:28 | 2022-12-14 20:54:49 | 0:31:21 | 0:18:04 | 0:13:17 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
"/var/log/ceph/a821e60e-7bef-11ed-8443-001a4aab830c/ceph-mon.smithi005.log:2022-12-14T20:48:02.981+0000 7f3758838700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7116707 | 2022-12-14 19:58:18 | 2022-12-14 20:23:54 | 2022-12-14 21:02:02 | 0:38:08 | 0:28:37 | 0:09:31 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 7116708 | 2022-12-14 19:58:19 | 2022-12-14 20:23:54 | 2022-12-14 21:08:54 | 0:45:00 | 0:33:10 | 0:11:50 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"/var/log/ceph/017eea1c-7bf0-11ed-8443-001a4aab830c/ceph-mon.smithi146.log:2022-12-14T20:52:42.125+0000 7f8889807700 10 mon.smithi146@0(leader).log v348 logging 2022-12-14T20:52:42.099827+0000 mgr.smithi146.aqaqnp (mgr.14672) 22 : cephadm [ERR] cephadm exited with an error code: 1, stderr: Non-zero exit code 125 from /usr/bin/podman container inspect --format {{.State.Status}} ceph-017eea1c-7bf0-11ed-8443-001a4aab830c-prometheus-smithi146" in cluster log |
fail | 7116709 | 2022-12-14 19:58:20 | 2022-12-14 20:24:25 | 2022-12-14 21:11:53 | 0:47:28 | 0:33:51 | 0:13:37 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason:
"/var/log/ceph/ab3a2036-7bef-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:47:45.406+0000 7f871cc27700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7116710 | 2022-12-14 19:58:31 | 2022-12-14 20:24:49 | 2022-12-14 20:55:47 | 0:30:58 | 0:21:35 | 0:09:23 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
fail | 7116711 | 2022-12-14 19:58:37 | 2022-12-14 20:25:14 | 2022-12-14 20:53:32 | 0:28:18 | 0:17:39 | 0:10:39 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
"/var/log/ceph/defd45d8-7bef-11ed-8443-001a4aab830c/ceph-mon.smithi029.log:2022-12-14T20:49:32.881+0000 7fbc38bca700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7116712 | 2022-12-14 19:58:44 | 2022-12-14 20:25:30 | 2022-12-14 21:22:58 | 0:57:28 | 0:38:35 | 0:18:53 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
"/var/log/ceph/b6918658-7bf0-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T20:59:04.650+0000 7f330b504700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7116713 | 2022-12-14 19:58:53 | 2022-12-14 20:29:29 | 2022-12-14 21:00:12 | 0:30:43 | 0:17:52 | 0:12:51 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 7116714 | 2022-12-14 19:58:57 | 2022-12-14 20:32:17 | 2022-12-14 20:51:11 | 0:18:54 | 0:10:39 | 0:08:15 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 7116715 | 2022-12-14 19:59:13 | 2022-12-14 20:33:06 | 2022-12-14 21:22:59 | 0:49:53 | 0:38:30 | 0:11:23 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
"/var/log/ceph/fff080ba-7bf0-11ed-8443-001a4aab830c/ceph-mon.smithi026.log:2022-12-14T20:59:47.121+0000 7f3d58b20700 10 mon.smithi026@0(leader).log v348 logging 2022-12-14T20:59:46.129735+0000 mgr.smithi061.ztyeze (mgr.14652) 55 : cephadm [WRN] unable to calc client keyring client.admin placement PlacementSpec(label='_admin'): Cannot place <ServiceSpec for service_name=mon>: No matching hosts for label _admin" in cluster log |
fail | 7116716 | 2022-12-14 19:59:18 | 2022-12-14 20:34:43 | 2022-12-14 20:52:31 | 0:17:48 | 0:08:17 | 0:09:31 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
cannot pull file with status: building |
fail | 7116717 | 2022-12-14 19:59:22 | 2022-12-14 20:36:21 | 2022-12-14 21:22:18 | 0:45:57 | 0:33:37 | 0:12:20 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
"/var/log/ceph/c45f1ba0-7bf1-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:11:49.369+0000 7f69d6419700 7 mon.c@2(peon).log v730 update_from_paxos applying incremental log 730 2022-12-14T21:11:48.359696+0000 mon.a (mon.0) 1756 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log |
fail | 7116718 | 2022-12-14 19:59:27 | 2022-12-14 20:38:35 | 2022-12-14 21:28:40 | 0:50:05 | 0:39:03 | 0:11:02 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
"/var/log/ceph/4548b866-7bf2-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:10:01.080+0000 7fa21aaa0700 7 mon.c@2(peon).log v396 update_from_paxos applying incremental log 396 2022-12-14T21:10:00.064548+0000 mon.a (mon.0) 1459 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log |
fail | 7116719 | 2022-12-14 19:59:43 | 2022-12-14 20:41:02 | 2022-12-14 21:08:54 | 0:27:52 | 0:17:51 | 0:10:01 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
"/var/log/ceph/136aa8ea-7bf2-11ed-8443-001a4aab830c/ceph-mon.smithi119.log:2022-12-14T21:03:38.737+0000 7fd1d44f8700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log |
pass | 7116720 | 2022-12-14 19:59:54 | 2022-12-14 20:41:26 | 2022-12-14 21:06:03 | 0:24:37 | 0:16:40 | 0:07:57 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
fail | 7116721 | 2022-12-14 20:00:05 | 2022-12-14 20:41:27 | 2022-12-14 20:52:21 | 0:10:54 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-adk-testing-2022-12-14-1132 |
fail | 7116722 | 2022-12-14 20:00:08 | 2022-12-14 20:43:12 | 2022-12-14 21:18:44 | 0:35:32 | 0:26:04 | 0:09:28 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
"/var/log/ceph/849ec73a-7bf2-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:10:00.593+0000 7f421c8db700 7 mon.c@2(peon).log v320 update_from_paxos applying incremental log 320 2022-12-14T21:10:00.000200+0000 mon.a (mon.0) 847 : cluster [WRN] Health detail: HEALTH_WARN noup flag(s) set; 3 osds down; Reduced data availability: 18 pgs inactive; Degraded data redundancy: 224/585 objects degraded (38.291%), 44 pgs degraded" in cluster log |
fail | 7116723 | 2022-12-14 20:00:14 | 2022-12-14 20:43:13 | 2022-12-14 20:52:52 | 0:09:39 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |||
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-adk-testing-2022-12-14-1132 |
fail | 7116724 | 2022-12-14 20:00:20 | 2022-12-14 20:43:42 | 2022-12-14 20:53:17 | 0:09:35 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |||
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-adk-testing-2022-12-14-1132 |
fail | 7116725 | 2022-12-14 20:00:26 | 2022-12-14 20:44:33 | 2022-12-14 20:52:56 | 0:08:23 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 | |||
Failure Reason:
Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=centos%2F8%2Fx86_64&ref=wip-adk-testing-2022-12-14-1132 |
fail | 7116726 | 2022-12-14 20:00:35 | 2022-12-14 20:44:34 | 2022-12-14 21:13:58 | 0:29:24 | 0:17:56 | 0:11:28 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"/var/log/ceph/cf7f5b16-7bf2-11ed-8443-001a4aab830c/ceph-mon.smithi055.log:2022-12-14T21:09:31.081+0000 7fc24d4b5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 7116727 | 2022-12-14 20:00:42 | 2022-12-14 20:46:53 | 2022-12-14 20:52:24 | 0:05:31 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi182 |
fail | 7116728 | 2022-12-14 20:00:53 | 2022-12-14 20:48:00 | 2022-12-14 21:39:58 | 0:51:58 | 0:38:44 | 0:13:14 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
"/var/log/ceph/878f3212-7bf3-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:11:16.596+0000 7fe303562700 7 mon.c@2(synchronizing).log v60 update_from_paxos applying incremental log 59 2022-12-14T21:11:14.601608+0000 mon.a (mon.0) 175 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
dead | 7116729 | 2022-12-14 20:01:01 | 2022-12-14 20:48:35 | 2022-12-14 21:21:15 | 0:32:40 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
dead | 7116730 | 2022-12-14 20:01:11 | 2022-12-14 20:49:14 | 2022-12-14 21:04:03 | 0:14:49 | 0:04:11 | 0:10:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
{'smithi154.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi112.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
dead | 7116731 | 2022-12-14 20:01:13 | 2022-12-14 20:59:53 | 186 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli} | 1 | ||||
Failure Reason:
{'smithi202.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
dead | 7116732 | 2022-12-14 20:01:14 | 2022-12-14 20:49:50 | 2022-12-14 21:03:45 | 0:13:55 | 0:03:39 | 0:10:16 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
{'smithi093.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi007.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
pass | 7116733 | 2022-12-14 20:01:15 | 2022-12-14 21:07:36 | 2022-12-14 21:34:09 | 0:26:33 | 0:12:18 | 0:14:15 | smithi | main | ubuntu | 20.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7116734 | 2022-12-14 20:01:16 | 2022-12-14 21:07:47 | 2022-12-14 21:34:38 | 0:26:51 | 0:15:32 | 0:11:19 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7116735 | 2022-12-14 20:01:18 | 2022-12-14 21:08:57 | 2022-12-14 22:16:53 | 1:07:56 | 0:56:28 | 0:11:28 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 10d1026a-7bf6-11ed-8443-001a4aab830c -e sha1=bd53b5fa4e346bda7a32ffb27cc867da30f64894 -- bash -c \'ceph versions | jq -e \'"\'"\'.osd | length == 2\'"\'"\'\'' |
fail | 7116736 | 2022-12-14 20:01:24 | 2022-12-14 21:10:04 | 2022-12-14 21:57:22 | 0:47:18 | 0:31:58 | 0:15:20 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
"/var/log/ceph/7e9e1f44-7bf6-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T21:49:59.997+0000 7f98d6fdf700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log |
pass | 7116737 | 2022-12-14 20:01:35 | 2022-12-14 21:10:40 | 2022-12-14 21:37:32 | 0:26:52 | 0:17:36 | 0:09:16 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
fail | 7116738 | 2022-12-14 20:01:45 | 2022-12-14 21:59:27 | 1836 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | ||||
Failure Reason:
"/var/log/ceph/c9fe50c0-7bf7-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-14T21:43:20.376+0000 7ff8faf6b700 0 log_channel(cluster) log [WRN] : Health check failed: 1/4 mons down, quorum a,e,c (MON_DOWN)" in cluster log |
fail | 7116739 | 2022-12-14 20:01:56 | 2022-12-14 21:14:39 | 2022-12-14 21:54:50 | 0:40:11 | 0:26:38 | 0:13:33 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
"/var/log/ceph/41b34c66-7bf7-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:40:13.515+0000 7f3f7f37c700 7 mon.c@2(peon).log v158 update_from_paxos applying incremental log 158 2022-12-14T21:40:12.500137+0000 mon.a (mon.0) 479 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log |
pass | 7116740 | 2022-12-14 20:02:08 | 2022-12-14 21:15:49 | 2022-12-14 21:43:05 | 0:27:16 | 0:16:24 | 0:10:52 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
fail | 7116741 | 2022-12-14 20:02:12 | 2022-12-14 21:17:12 | 2022-12-14 21:49:48 | 0:32:36 | 0:20:33 | 0:12:03 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/91c0fc4e-7bf7-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-14T21:40:50.254+0000 7f0298e65700 7 mon.c@2(synchronizing).log v64 update_from_paxos applying incremental log 63 2022-12-14T21:40:48.271602+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7116742 | 2022-12-14 20:02:24 | 2022-12-14 21:19:25 | 2022-12-14 21:46:23 | 0:26:58 | 0:14:46 | 0:12:12 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
fail | 7116743 | 2022-12-14 20:02:40 | 2022-12-15 03:47:44 | 2022-12-15 04:39:22 | 0:51:38 | 0:42:01 | 0:09:37 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
"/var/log/ceph/67c7ec90-7c2e-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:13:17.570+0000 7faf04f66700 7 mon.c@2(synchronizing).log v60 update_from_paxos applying incremental log 59 2022-12-15T04:13:15.577371+0000 mon.a (mon.0) 178 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
fail | 7116744 | 2022-12-14 20:02:46 | 2022-12-15 03:48:19 | 2022-12-15 04:36:31 | 0:48:12 | 0:38:24 | 0:09:48 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
"/var/log/ceph/e30bd610-7c2d-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:20:00.396+0000 7f3eb0f72700 7 mon.c@2(peon).log v570 update_from_paxos applying incremental log 570 2022-12-15T04:20:00.000130+0000 mon.a (mon.0) 2426 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log |
pass | 7116746 | 2022-12-14 20:02:52 | 2022-12-15 03:48:41 | 2022-12-15 04:13:37 | 0:24:56 | 0:15:44 | 0:09:12 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7116748 | 2022-12-14 20:03:06 | 2022-12-15 04:35:23 | 2121 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | ||||
Failure Reason:
"/var/log/ceph/449e565a-7c2e-11ed-8443-001a4aab830c/ceph-mon.smithi003.log:2022-12-15T04:19:59.999+0000 7f49dc857700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data" in cluster log |
fail | 7116751 | 2022-12-14 20:03:12 | 2022-12-15 03:52:00 | 2022-12-15 04:32:42 | 0:40:42 | 0:29:31 | 0:11:11 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
"/var/log/ceph/facab2de-7c2e-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:17:33.914+0000 7f0e56be8700 7 mon.c@2(synchronizing).log v63 update_from_paxos applying incremental log 62 2022-12-15T04:17:31.931010+0000 mon.a (mon.0) 173 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
pass | 7116753 | 2022-12-14 20:03:22 | 2022-12-15 04:24:24 | 1125 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | ||||
pass | 7116754 | 2022-12-14 20:03:28 | 2022-12-15 03:52:49 | 2022-12-15 04:21:49 | 0:29:00 | 0:17:23 | 0:11:37 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
fail | 7116755 | 2022-12-14 20:03:45 | 2022-12-15 03:53:24 | 2022-12-15 04:38:18 | 0:44:54 | 0:33:04 | 0:11:50 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
"/var/log/ceph/48ab2fca-7c2e-11ed-8443-001a4aab830c/ceph-mon.smithi132.log:2022-12-15T04:14:42.382+0000 7fe10e268700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
fail | 7116756 | 2022-12-14 20:03:50 | 2022-12-15 03:53:50 | 2022-12-15 04:32:42 | 0:38:52 | 0:29:11 | 0:09:41 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
"/var/log/ceph/2aaeb55e-7c2f-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:20:38.440+0000 7f297177f700 7 mon.c@2(peon).log v174 update_from_paxos applying incremental log 174 2022-12-15T04:20:38.111898+0000 mon.a (mon.0) 528 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
pass | 7116757 | 2022-12-14 20:03:51 | 2022-12-15 03:54:17 | 2022-12-15 04:15:21 | 0:21:04 | 0:09:17 | 0:11:47 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7116758 | 2022-12-14 20:04:07 | 2022-12-15 03:57:16 | 2022-12-15 04:23:02 | 0:25:46 | 0:16:41 | 0:09:05 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
fail | 7116759 | 2022-12-14 20:04:14 | 2022-12-15 03:57:52 | 2022-12-15 04:25:48 | 0:27:56 | 0:18:04 | 0:09:52 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"/var/log/ceph/fed5b2b6-7c2e-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:17:06.739+0000 7fae76baf700 7 mon.c@2(synchronizing).log v58 update_from_paxos applying incremental log 57 2022-12-15T04:17:04.742914+0000 mon.a (mon.0) 174 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log |
fail | 7116760 | 2022-12-14 20:04:19 | 2022-12-15 03:58:57 | 2022-12-15 05:07:16 | 1:08:19 | 0:55:25 | 0:12:54 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
"/var/log/ceph/4ebaab6e-7c30-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:43:17.131+0000 7f31cd02a700 7 mon.c@2(peon).log v922 update_from_paxos applying incremental log 922 2022-12-15T04:43:16.111855+0000 mon.a (mon.0) 1736 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log |
dead | 7116761 | 2022-12-14 20:04:35 | 2022-12-15 04:02:59 | 2022-12-15 04:06:17 | 0:03:18 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi105 |
fail | 7116762 | 2022-12-14 20:04:42 | 2022-12-15 04:03:19 | 2022-12-15 04:45:49 | 0:42:30 | 0:34:01 | 0:08:29 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"/var/log/ceph/e8df9fa2-7c2f-11ed-8443-001a4aab830c/ceph-mon.smithi036.log:2022-12-15T04:27:00.785+0000 7fc34645b700 0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log |
fail | 7116763 | 2022-12-14 20:04:58 | 2022-12-15 04:03:30 | 2022-12-15 04:50:38 | 0:47:08 | 0:33:49 | 0:13:19 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
"/var/log/ceph/a49b1fba-7c2f-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:25:15.680+0000 7f649c596700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log |
fail | 7116764 | 2022-12-14 20:05:04 | 2022-12-15 04:04:21 | 2022-12-15 04:38:56 | 0:34:35 | 0:18:16 | 0:16:19 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
"/var/log/ceph/39b16910-7c30-11ed-8443-001a4aab830c/ceph-mon.smithi061.log:2022-12-15T04:33:23.737+0000 7f20bc815700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7116765 | 2022-12-14 20:05:15 | 2022-12-15 04:07:04 | 2022-12-15 04:32:25 | 0:25:21 | 0:16:30 | 0:08:51 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
"/var/log/ceph/7b4988f8-7c30-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:28:28.281+0000 7f154a72f700 0 log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7116766 | 2022-12-14 20:05:22 | 2022-12-15 04:07:05 | 2022-12-15 04:43:06 | 0:36:01 | 0:25:57 | 0:10:04 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
fail | 7116767 | 2022-12-14 20:05:28 | 2022-12-15 04:07:05 | 2022-12-15 04:38:20 | 0:31:15 | 0:19:13 | 0:12:02 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
"/var/log/ceph/1814b2da-7c30-11ed-8443-001a4aab830c/ceph-mon.smithi105.log:2022-12-15T04:32:05.956+0000 7f2a8d912700 0 log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
fail | 7116768 | 2022-12-14 20:05:41 | 2022-12-15 04:07:05 | 2022-12-15 04:47:17 | 0:40:12 | 0:26:50 | 0:13:22 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
"/var/log/ceph/fc991bbc-7c30-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:33:43.484+0000 7f5715cdf700 7 mon.c@2(peon).log v169 update_from_paxos applying incremental log 169 2022-12-15T04:33:42.488772+0000 mon.a (mon.0) 524 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log |
fail | 7116769 | 2022-12-14 20:05:45 | 2022-12-15 04:09:50 | 2022-12-15 04:51:00 | 0:41:10 | 0:31:29 | 0:09:41 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
"/var/log/ceph/058bfb04-7c31-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:34:41.585+0000 7f2bae818700 0 log_channel(cluster) log [WRN] : Replacing daemon mds.a.smithi098.myhbqp as rank 0 with standby daemon mds.user_test_fs.smithi098.exozhg" in cluster log |
pass | 7116770 | 2022-12-14 20:05:46 | 2022-12-15 04:10:11 | 2022-12-15 04:34:42 | 0:24:31 | 0:16:05 | 0:08:26 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
fail | 7116771 | 2022-12-14 20:05:52 | 2022-12-15 04:10:13 | 2022-12-15 04:47:17 | 0:37:04 | 0:24:19 | 0:12:45 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"/var/log/ceph/829cb490-7c30-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:30:27.589+0000 7f7a39fc9700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7116772 | 2022-12-14 20:06:02 | 2022-12-15 04:10:14 | 2022-12-15 05:06:49 | 0:56:35 | 0:43:41 | 0:12:54 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
"/var/log/ceph/e4cd6394-7c30-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:45:35.597+0000 7f9ac7114700 0 log_channel(cluster) log [WRN] : Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log |
fail | 7116773 | 2022-12-14 20:06:08 | 2022-12-15 04:10:19 | 2022-12-15 05:02:41 | 0:52:22 | 0:40:04 | 0:12:18 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
"/var/log/ceph/891a4250-7c31-11ed-8443-001a4aab830c/ceph-mon.c.log:2022-12-15T04:50:00.994+0000 7f9406b83700 7 mon.c@2(peon).log v749 update_from_paxos applying incremental log 749 2022-12-15T04:50:00.000178+0000 mon.a (mon.0) 2646 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log |
fail | 7116774 | 2022-12-14 20:06:14 | 2022-12-15 04:11:47 | 2022-12-15 04:58:14 | 0:46:27 | 0:34:02 | 0:12:25 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"/var/log/ceph/9383c1da-7c31-11ed-8443-001a4aab830c/ceph-mon.smithi102.log:2022-12-15T04:39:59.998+0000 7f99e55f0700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data" in cluster log |
fail | 7116775 | 2022-12-14 20:06:29 | 2022-12-15 04:12:57 | 2022-12-15 04:41:01 | 0:28:04 | 0:18:55 | 0:09:09 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
"/var/log/ceph/5140d402-7c31-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:34:09.350+0000 7f176400e700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7116776 | 2022-12-14 20:06:36 | 2022-12-15 04:13:02 | 2022-12-15 04:41:28 | 0:28:26 | 0:17:21 | 0:11:05 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
pass | 7116777 | 2022-12-14 20:06:47 | 2022-12-15 04:13:53 | 2022-12-15 04:51:59 | 0:38:06 | 0:27:55 | 0:10:11 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
fail | 7116778 | 2022-12-14 20:06:53 | 2022-12-15 04:14:08 | 2022-12-15 04:39:11 | 0:25:03 | 0:15:21 | 0:09:42 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
"/var/log/ceph/1f59d3f8-7c31-11ed-8443-001a4aab830c/ceph-mon.smithi032.log:2022-12-15T04:35:19.256+0000 7f2d6b34d700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7116779 | 2022-12-14 20:06:59 | 2022-12-15 04:14:18 | 2022-12-15 05:06:35 | 0:52:17 | 0:36:35 | 0:15:42 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
"/var/log/ceph/d1016f80-7c31-11ed-8443-001a4aab830c/ceph-mon.a.log:2022-12-15T04:39:35.773+0000 7f99e5704700 0 log_channel(cluster) log [WRN] : Health check failed: 1/4 mons down, quorum a,e,c (MON_DOWN)" in cluster log |