User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2024-04-22 22:45:33 | 2024-04-22 22:52:39 | 2024-04-24 11:40:33 | 1 day, 12:47:54 | orch:cephadm | wip-adk-testing-2024-04-22-1618 | smithi | 43be020 | 11 | 83 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7669168 | 2024-04-22 22:45:39 | 2024-04-22 22:49:56 | 2024-04-22 23:33:43 | 0:43:47 | 0:28:03 | 0:15:44 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: "2024-04-22T23:14:27.340558+0000 mon.smithi027 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669169 | 2024-04-22 22:45:40 | 2024-04-22 22:52:37 | 2024-04-22 23:09:44 | 0:17:07 | 0:09:36 | 0:07:31 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason: Command failed on smithi045 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b0aceeb6-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669170 | 2024-04-22 22:45:41 | 2024-04-22 22:52:38 | 2024-04-22 23:32:39 | 0:40:01 | 0:31:48 | 0:08:13 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: "2024-04-22T23:20:00.000167+0000 mon.smithi055 (mon.0) 269 : cluster [WRN] Health detail: HEALTH_WARN Degraded data redundancy: 43/219 objects degraded (19.635%), 18 pgs degraded" in cluster log
pass | 7669171 | 2024-04-22 22:45:42 | 2024-04-22 22:52:38 | 2024-04-22 23:34:26 | 0:41:48 | 0:34:23 | 0:07:25 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 7669172 | 2024-04-22 22:45:43 | 2024-04-22 22:52:38 | 2024-04-22 23:17:07 | 0:24:29 | 0:12:40 | 0:11:49 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
Failure Reason: "2024-04-22T23:14:08.025042+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669173 | 2024-04-22 22:45:45 | 2024-04-22 22:52:39 | 2024-04-22 23:07:25 | 0:14:46 | 0:08:33 | 0:06:13 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi081 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 958663d8-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7669174 | 2024-04-22 22:45:46 | 2024-04-22 22:52:39 | 2024-04-22 23:16:18 | 0:23:39 | 0:14:23 | 0:09:16 | smithi | main | ubuntu | 22.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7669175 | 2024-04-22 22:45:47 | 2024-04-22 22:52:40 | 2024-04-22 23:30:11 | 0:37:31 | 0:27:52 | 0:09:39 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-04-22T23:12:00.176572+0000 mon.smithi092 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669176 | 2024-04-22 22:45:48 | 2024-04-22 22:52:40 | 2024-04-22 23:08:31 | 0:15:51 | 0:07:01 | 0:08:50 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: Command failed on smithi082 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 74851f76-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669177 | 2024-04-22 22:45:49 | 2024-04-22 22:52:40 | 2024-04-22 23:07:58 | 0:15:18 | 0:08:31 | 0:06:47 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} | 2 | |
Failure Reason: Command failed on smithi139 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b1d6c92e-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669178 | 2024-04-22 22:45:50 | 2024-04-22 22:52:41 | 2024-04-22 23:06:30 | 0:13:49 | 0:06:37 | 0:07:12 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: Command failed on smithi191 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6e3e3aee-00fc-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi183:172.21.15.183=smithi183;smithi191:172.21.15.191=smithi191'"
fail | 7669179 | 2024-04-22 22:45:51 | 2024-04-22 22:52:41 | 2024-04-22 23:05:52 | 0:13:11 | 0:06:34 | 0:06:37 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi029 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 587fd4e2-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7669180 | 2024-04-22 22:45:52 | 2024-04-22 22:52:41 | 2024-04-22 23:10:20 | 0:17:39 | 0:10:14 | 0:07:25 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
pass | 7669181 | 2024-04-22 22:45:53 | 2024-04-22 22:52:42 | 2024-04-22 23:48:40 | 0:55:58 | 0:48:07 | 0:07:51 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
dead | 7669182 | 2024-04-22 22:45:54 | 2024-04-22 22:52:42 | 2024-04-22 23:04:45 | 0:12:03 | 0:03:30 | 0:08:33 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: ['Failed to manage policy for boolean nagios_run_sudo: [Errno 11] Resource temporarily unavailable']
fail | 7669183 | 2024-04-22 22:45:55 | 2024-04-22 22:52:43 | 2024-04-22 23:08:13 | 0:15:30 | 0:05:38 | 0:09:52 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: SSH connection to smithi175 was lost: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest python3-pytest python3-pytest python3-pytest'
fail | 7669184 | 2024-04-22 22:45:56 | 2024-04-22 22:52:43 | 2024-04-22 23:08:13 | 0:15:30 | 0:08:33 | 0:06:57 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason: Command failed on smithi129 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b3873db2-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
dead | 7669185 | 2024-04-22 22:45:57 | 2024-04-22 22:52:43 | 2024-04-23 11:02:38 | 12:09:55 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: hit max job timeout
fail | 7669186 | 2024-04-22 22:45:58 | 2024-04-22 22:54:34 | 2024-04-22 23:32:25 | 0:37:51 | 0:27:36 | 0:10:15 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason: "2024-04-22T23:14:06.859784+0000 mon.smithi078 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669187 | 2024-04-22 22:45:59 | 2024-04-22 22:54:34 | 2024-04-22 23:07:49 | 0:13:15 | 0:06:34 | 0:06:41 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9ac59ea4-00fc-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'"
fail | 7669188 | 2024-04-22 22:46:00 | 2024-04-22 22:54:45 | 2024-04-22 23:19:10 | 0:24:25 | 0:12:05 | 0:12:20 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi089 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fa1eee04-00fd-11ef-bc93-c7b262605968 -- ceph orch daemon add osd smithi089:/dev/nvme4n1'
fail | 7669189 | 2024-04-22 22:46:01 | 2024-04-22 22:57:25 | 2024-04-22 23:21:07 | 0:23:42 | 0:14:09 | 0:09:33 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason: Command failed on smithi033 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 367dd694-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669190 | 2024-04-22 22:46:02 | 2024-04-22 22:57:26 | 2024-04-23 00:14:15 | 1:16:49 | 1:06:55 | 0:09:54 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
fail | 7669191 | 2024-04-22 22:46:03 | 2024-04-22 23:01:17 | 2024-04-22 23:14:17 | 0:13:00 | 0:06:01 | 0:06:59 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: Command failed on smithi097 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8a055d4c-00fd-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi086:172.21.15.86=smithi086;smithi097:172.21.15.97=smithi097'"
fail | 7669192 | 2024-04-22 22:46:04 | 2024-04-22 23:01:17 | 2024-04-22 23:22:11 | 0:20:54 | 0:10:20 | 0:10:34 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason: Command failed on smithi026 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7875b7e2-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669193 | 2024-04-22 22:46:05 | 2024-04-22 23:05:18 | 2024-04-22 23:18:31 | 0:13:13 | 0:06:53 | 0:06:20 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} | 2 | |
Failure Reason: Command failed on smithi043 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3482b94a-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669194 | 2024-04-22 22:46:06 | 2024-04-22 23:05:19 | 2024-04-22 23:19:47 | 0:14:28 | 0:06:56 | 0:07:32 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: Command failed on smithi044 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c78f834-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi031:172.21.15.31=smithi031;smithi044:172.21.15.44=smithi044'"
fail | 7669195 | 2024-04-22 22:46:07 | 2024-04-22 23:06:59 | 2024-04-22 23:31:00 | 0:24:01 | 0:14:03 | 0:09:58 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: "2024-04-22T23:28:55.494851+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669196 | 2024-04-22 22:46:08 | 2024-04-22 23:07:00 | 2024-04-22 23:31:30 | 0:24:30 | 0:13:43 | 0:10:47 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: "2024-04-22T23:28:38.806256+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669197 | 2024-04-22 22:46:10 | 2024-04-22 23:08:00 | 2024-04-24 11:40:33 | 1 day, 12:32:33 | 22:15:31 | 14:17:02 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: "2024-04-23T14:32:53.427924+0000 mon.a (mon.0) 529 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log
fail | 7669198 | 2024-04-22 22:46:11 | 2024-04-22 23:08:01 | 2024-04-22 23:45:38 | 0:37:37 | 0:28:04 | 0:09:33 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
Failure Reason: "2024-04-22T23:26:51.068182+0000 mon.smithi139 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669199 | 2024-04-22 22:46:12 | 2024-04-22 23:08:01 | 2024-04-22 23:21:18 | 0:13:17 | 0:06:36 | 0:06:41 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: Command failed on smithi191 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93431434-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi191:172.21.15.191=smithi191'"
pass | 7669200 | 2024-04-22 22:46:13 | 2024-04-22 23:08:01 | 2024-04-22 23:47:21 | 0:39:20 | 0:33:06 | 0:06:14 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7669201 | 2024-04-22 22:46:14 | 2024-04-22 23:08:02 | 2024-04-22 23:21:08 | 0:13:06 | 0:06:22 | 0:06:44 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 80e50022-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'"
pass | 7669202 | 2024-04-22 22:46:15 | 2024-04-22 23:08:02 | 2024-04-22 23:27:46 | 0:19:44 | 0:11:44 | 0:08:00 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
dead | 7669203 | 2024-04-22 22:46:16 | 2024-04-22 23:08:03 | 2024-04-22 23:09:17 | 0:01:14 | | | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi113
dead | 7669204 | 2024-04-22 22:46:17 | 2024-04-22 23:08:13 | 2024-04-22 23:09:17 | 0:01:04 | | | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason: Error reimaging machines: Failed to power on smithi090
fail | 7669205 | 2024-04-22 22:46:18 | 2024-04-22 23:08:13 | 2024-04-22 23:24:09 | 0:15:56 | 0:08:23 | 0:07:33 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: Command failed on smithi178 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a2f0a036-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi173:172.21.15.173=smithi173;smithi178:172.21.15.178=smithi178'"
pass | 7669206 | 2024-04-22 22:46:19 | 2024-04-22 23:08:14 | 2024-04-22 23:32:27 | 0:24:13 | 0:13:40 | 0:10:33 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 | |
fail | 7669207 | 2024-04-22 22:46:20 | 2024-04-22 23:08:14 | 2024-04-22 23:24:02 | 0:15:48 | 0:07:41 | 0:08:07 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
Failure Reason: Command failed on smithi204 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ab62d4f0-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi103:172.21.15.103=smithi103;smithi204:172.21.15.204=smithi204'"
fail | 7669208 | 2024-04-22 22:46:21 | 2024-04-22 23:08:14 | 2024-04-22 23:33:04 | 0:24:50 | 0:13:13 | 0:11:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} | 2 | |
Failure Reason: "2024-04-22T23:30:50.560762+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669209 | 2024-04-22 22:46:22 | 2024-04-22 23:08:15 | 2024-04-22 23:30:20 | 0:22:05 | 0:13:13 | 0:08:52 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-04-22T23:21:50.319982+0000 mon.a (mon.0) 330 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.0 on smithi019 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7669210 | 2024-04-22 22:46:23 | 2024-04-22 23:08:15 | 2024-04-22 23:47:08 | 0:38:53 | 0:28:04 | 0:10:49 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: "2024-04-22T23:29:07.853875+0000 mon.smithi059 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669211 | 2024-04-22 22:46:24 | 2024-04-22 23:08:15 | 2024-04-22 23:26:13 | 0:17:58 | 0:09:55 | 0:08:03 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi064 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f33eecb4-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669212 | 2024-04-22 22:46:25 | 2024-04-22 23:08:16 | 2024-04-22 23:34:44 | 0:26:28 | 0:14:07 | 0:12:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: "2024-04-22T23:31:06.523395+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669213 | 2024-04-22 22:46:26 | 2024-04-22 23:08:16 | 2024-04-22 23:25:56 | 0:17:40 | 0:10:09 | 0:07:31 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi117 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f6ce019e-00fe-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669214 | 2024-04-22 22:46:27 | 2024-04-22 23:08:17 | 2024-04-23 00:24:35 | 1:16:18 | 1:08:06 | 0:08:12 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
fail | 7669215 | 2024-04-22 22:46:28 | 2024-04-22 23:08:17 | 2024-04-22 23:24:18 | 0:16:01 | 0:07:32 | 0:08:29 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: Command failed on smithi194 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a8065548-00fe-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi096:172.21.15.96=smithi096;smithi194:172.21.15.194=smithi194'"
fail | 7669216 | 2024-04-22 22:46:29 | 2024-04-22 23:08:17 | 2024-04-22 23:33:24 | 0:25:07 | 0:14:30 | 0:10:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi113 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=43be020184947e53516056c9931e1ac5bdbbb1a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
dead | 7669217 | 2024-04-22 22:46:30 | 2024-04-22 23:09:18 | 2024-04-23 11:18:39 | 12:09:21 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: hit max job timeout
fail | 7669218 | 2024-04-22 22:46:32 | 2024-04-22 23:09:28 | 2024-04-22 23:49:56 | 0:40:28 | 0:28:46 | 0:11:42 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: "2024-04-22T23:31:52.286580+0000 mon.smithi063 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669219 | 2024-04-22 22:46:33 | 2024-04-22 23:09:29 | 2024-04-22 23:48:18 | 0:38:49 | 0:28:37 | 0:10:12 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: "2024-04-22T23:29:57.089801+0000 mon.smithi102 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669220 | 2024-04-22 22:46:34 | 2024-04-22 23:10:29 | 2024-04-22 23:26:01 | 0:15:32 | 0:09:14 | 0:06:18 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi116 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2ddb6460-00ff-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669221 | 2024-04-22 22:46:35 | 2024-04-22 23:10:40 | 2024-04-22 23:25:37 | 0:14:57 | 0:08:58 | 0:05:59 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi073 with status 2: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0b538e18-00ff-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7669222 | 2024-04-22 22:46:36 | 2024-04-22 23:10:40 | 2024-04-22 23:23:30 | 0:12:50 | 0:05:39 | 0:07:11 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=43be020184947e53516056c9931e1ac5bdbbb1a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 7669223 | 2024-04-22 22:46:37 | 2024-04-22 23:10:40 | 2024-04-22 23:52:04 | 0:41:24 | 0:31:16 | 0:10:08 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: "2024-04-22T23:40:00.000110+0000 mon.smithi107 (mon.0) 316 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 54/339 objects degraded (15.929%), 16 pgs degraded" in cluster log
fail | 7669224 | 2024-04-22 22:46:38 | 2024-04-22 23:12:21 | 2024-04-22 23:27:13 | 0:14:52 | 0:07:54 | 0:06:58 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} | 1 | |
Failure Reason:
Command failed on smithi195 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 553ac258-00ff-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
pass | 7669225 | 2024-04-22 22:46:39 | 2024-04-22 23:12:21 | 2024-04-22 23:33:28 | 0:21:07 | 0:11:00 | 0:10:07 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 7669226 | 2024-04-22 22:46:40 | 2024-04-22 23:16:22 | 2024-04-22 23:31:19 | 0:14:57 | 0:06:57 | 0:08:00 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} | 2 | |
Failure Reason:
Command failed on smithi152 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f717b11c-00ff-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669227 | 2024-04-22 22:46:41 | 2024-04-22 23:18:03 | 2024-04-22 23:31:04 | 0:13:01 | 0:06:58 | 0:06:03 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
Command failed on smithi136 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d468527a-00ff-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi133:172.21.15.133=smithi133;smithi136:172.21.15.136=smithi136'" |
fail | 7669228 | 2024-04-22 22:46:42 | 2024-04-22 23:18:03 | 2024-04-22 23:30:46 | 0:12:43 | 0:06:34 | 0:06:09 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
Command failed on smithi005 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cc5a7f40-00ff-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669229 | 2024-04-22 22:46:43 | 2024-04-22 23:18:03 | 2024-04-22 23:40:46 | 0:22:43 | 0:10:58 | 0:11:45 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
Failure Reason:
"2024-04-22T23:38:09.641011+0000 mon.a (mon.0) 509 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi151 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log |
fail | 7669230 | 2024-04-22 22:46:44 | 2024-04-22 23:23:35 | 2024-04-23 00:14:22 | 0:50:47 | 0:43:08 | 0:07:39 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e0e8718c-0100-11ef-bc93-c7b262605968 -e sha1=43be020184947e53516056c9931e1ac5bdbbb1a5 -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\'' |
fail | 7669231 | 2024-04-22 22:46:46 | 2024-04-22 23:23:45 | 2024-04-22 23:36:41 | 0:12:56 | 0:06:29 | 0:06:27 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
Failure Reason:
Command failed on smithi173 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a06f0fb2-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi094:172.21.15.94=smithi094;smithi173:172.21.15.173=smithi173'" |
fail | 7669232 | 2024-04-22 22:46:47 | 2024-04-22 23:23:45 | 2024-04-22 23:38:50 | 0:15:05 | 0:08:09 | 0:06:56 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi204 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ad846f1c-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi189:172.21.15.189=smithi189;smithi204:172.21.15.204=smithi204'" |
pass | 7669233 | 2024-04-22 22:46:48 | 2024-04-22 23:23:46 | 2024-04-22 23:42:57 | 0:19:11 | 0:13:08 | 0:06:03 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 7669234 | 2024-04-22 22:46:49 | 2024-04-22 23:23:46 | 2024-04-22 23:41:34 | 0:17:48 | 0:09:12 | 0:08:36 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 | |
Failure Reason:
Command failed on smithi071 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16541a1a-0101-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669235 | 2024-04-22 22:46:50 | 2024-04-22 23:23:57 | 2024-04-23 00:02:11 | 0:38:14 | 0:27:52 | 0:10:22 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
"2024-04-22T23:43:35.556729+0000 mon.smithi047 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669236 | 2024-04-22 22:46:51 | 2024-04-22 23:23:57 | 2024-04-23 00:08:42 | 0:44:45 | 0:37:16 | 0:07:29 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds |
fail | 7669237 | 2024-04-22 22:46:52 | 2024-04-22 23:23:57 | 2024-04-22 23:37:43 | 0:13:46 | 0:07:15 | 0:06:31 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
Command failed on smithi123 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c50ba4fc-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi089:172.21.15.89=smithi089;smithi123:172.21.15.123=smithi123'" |
fail | 7669238 | 2024-04-22 22:46:53 | 2024-04-22 23:23:58 | 2024-04-22 23:47:20 | 0:23:22 | 0:13:44 | 0:09:38 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
"2024-04-22T23:45:07.986813+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669239 | 2024-04-22 22:46:54 | 2024-04-22 23:23:58 | 2024-04-22 23:42:11 | 0:18:13 | 0:09:36 | 0:08:37 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi086 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3f6660f2-0101-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669240 | 2024-04-22 22:46:55 | 2024-04-22 23:23:58 | 2024-04-22 23:48:15 | 0:24:17 | 0:13:53 | 0:10:24 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
Failure Reason:
"2024-04-22T23:45:02.602969+0000 mon.a (mon.0) 102 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669241 | 2024-04-22 22:46:56 | 2024-04-22 23:23:59 | 2024-04-22 23:39:36 | 0:15:37 | 0:08:07 | 0:07:30 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi191 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0f7837d0-0101-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669242 | 2024-04-22 22:46:57 | 2024-04-22 23:23:59 | 2024-04-22 23:38:06 | 0:14:07 | 0:07:03 | 0:07:04 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi052 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dde0908c-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi052:172.21.15.52=smithi052'" |
fail | 7669243 | 2024-04-22 22:46:58 | 2024-04-22 23:24:00 | 2024-04-22 23:39:54 | 0:15:54 | 0:08:54 | 0:07:00 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
Failure Reason:
Command failed on smithi026 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 28a2da30-0101-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669244 | 2024-04-22 22:46:59 | 2024-04-22 23:24:00 | 2024-04-22 23:39:33 | 0:15:33 | 0:07:27 | 0:08:06 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi193 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c80a7cb4-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi110:172.21.15.110=smithi110;smithi193:172.21.15.193=smithi193'" |
fail | 7669245 | 2024-04-22 22:47:00 | 2024-04-22 23:24:00 | 2024-04-22 23:38:45 | 0:14:45 | 0:08:38 | 0:06:07 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason:
Command failed on smithi050 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 07183536-0101-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669246 | 2024-04-22 22:47:01 | 2024-04-22 23:24:01 | 2024-04-23 00:02:59 | 0:38:58 | 0:27:49 | 0:11:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
"2024-04-22T23:44:34.768270+0000 mon.smithi046 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
pass | 7669247 | 2024-04-22 22:47:03 | 2024-04-22 23:24:01 | 2024-04-23 00:01:34 | 0:37:33 | 0:31:10 | 0:06:23 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7669248 | 2024-04-22 22:47:04 | 2024-04-22 23:24:01 | 2024-04-22 23:37:52 | 0:13:51 | 0:07:17 | 0:06:34 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason:
Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c832ada6-0100-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi131:172.21.15.131=smithi131;smithi148:172.21.15.148=smithi148'" |
pass | 7669249 | 2024-04-22 22:47:05 | 2024-04-22 23:24:02 | 2024-04-22 23:44:38 | 0:20:36 | 0:10:01 | 0:10:35 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
fail | 7669250 | 2024-04-22 22:47:06 | 2024-04-22 23:26:53 | 2024-04-23 00:14:36 | 0:47:43 | 0:37:01 | 0:10:42 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
"2024-04-22T23:58:36.405019+0000 mon.a (mon.0) 869 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669251 | 2024-04-22 22:47:07 | 2024-04-22 23:26:53 | 2024-04-23 00:05:35 | 0:38:42 | 0:28:05 | 0:10:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
"2024-04-22T23:46:41.177454+0000 mon.smithi087 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669252 | 2024-04-22 22:47:08 | 2024-04-22 23:27:53 | 2024-04-22 23:54:22 | 0:26:29 | 0:12:15 | 0:14:14 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d36e2dba-0102-11ef-bc93-c7b262605968 -- ceph orch daemon add osd smithi062:/dev/nvme4n1' |
fail | 7669253 | 2024-04-22 22:47:09 | 2024-04-22 23:32:34 | 2024-04-22 23:57:27 | 0:24:53 | 0:13:23 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
"2024-04-22T23:54:28.227091+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669254 | 2024-04-22 22:47:10 | 2024-04-22 23:33:35 | 2024-04-22 23:58:27 | 0:24:52 | 0:13:46 | 0:11:06 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
"2024-04-22T23:56:08.264000+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669255 | 2024-04-22 22:47:11 | 2024-04-22 23:34:36 | 2024-04-22 23:57:44 | 0:23:08 | 0:09:38 | 0:13:30 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi027 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62b1ddaa-0103-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669256 | 2024-04-22 22:47:12 | 2024-04-22 23:39:17 | 2024-04-22 23:53:39 | 0:14:22 | 0:06:06 | 0:08:16 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
Command failed on smithi186 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 00653638-0103-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi113:172.21.15.113=smithi113;smithi186:172.21.15.186=smithi186'" |
fail | 7669257 | 2024-04-22 22:47:13 | 2024-04-22 23:39:17 | 2024-04-23 00:06:46 | 0:27:29 | 0:15:59 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason:
"2024-04-23T00:03:20.892648+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669258 | 2024-04-22 22:47:14 | 2024-04-22 23:39:17 | 2024-04-23 00:02:24 | 0:23:07 | 0:11:58 | 0:11:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
Failure Reason:
"2024-04-23T00:00:58.278369+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669259 | 2024-04-22 22:47:15 | 2024-04-22 23:39:18 | 2024-04-23 00:18:21 | 0:39:03 | 0:27:38 | 0:11:25 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
"2024-04-22T23:59:16.851941+0000 mon.smithi081 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log |
fail | 7669260 | 2024-04-22 22:47:16 | 2024-04-22 23:39:18 | 2024-04-23 00:05:11 | 0:25:53 | 0:14:13 | 0:11:40 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi167 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 575c0a38-0104-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669261 | 2024-04-22 22:47:17 | 2024-04-22 23:39:18 | 2024-04-23 00:22:21 | 0:43:03 | 0:36:46 | 0:06:17 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds |
fail | 7669262 | 2024-04-22 22:47:18 | 2024-04-22 23:39:19 | 2024-04-22 23:52:15 | 0:12:56 | 0:06:25 | 0:06:31 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
Failure Reason:
Command failed on smithi130 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d1bc11ee-0102-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi032:172.21.15.32=smithi032;smithi130:172.21.15.130=smithi130'" |
fail | 7669263 | 2024-04-22 22:47:19 | 2024-04-22 23:39:19 | 2024-04-22 23:54:10 | 0:14:51 | 0:06:34 | 0:08:17 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi066 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1fb9b2ac-0103-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi064:172.21.15.64=smithi064;smithi066:172.21.15.66=smithi066'" |
fail | 7669264 | 2024-04-22 22:47:21 | 2024-04-22 23:39:19 | 2024-04-22 23:54:39 | 0:15:20 | 0:06:22 | 0:08:58 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi165 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 26277ed0-0103-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi117:172.21.15.117=smithi117;smithi165:172.21.15.165=smithi165'" |
fail | 7669265 | 2024-04-22 22:47:22 | 2024-04-22 23:39:20 | 2024-04-22 23:55:08 | 0:15:48 | 0:08:59 | 0:06:49 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi089 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c634854-0103-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
fail | 7669266 | 2024-04-22 22:47:23 | 2024-04-22 23:39:20 | 2024-04-22 23:56:33 | 0:17:13 | 0:08:29 | 0:08:44 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi082 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6cc844f0-0103-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |