User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2024-04-24 11:41:41 | 2024-04-24 12:41:56 | 2024-04-25 01:10:46 | 12:28:50 | orch:cephadm | wip-adk-testing-2024-04-23-1222 | smithi | 23fcfb9 | 1 | 9 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7671316 | 2024-04-24 11:41:46 | 2024-04-24 12:41:56 | 2024-04-24 13:21:01 | 0:39:05 | 0:27:35 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: "2024-04-24T13:02:02.315587+0000 mon.smithi086 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7671317 | 2024-04-24 11:41:48 | 2024-04-24 12:43:07 | 2024-04-24 13:03:13 | 0:20:06 | 0:08:43 | 0:11:23 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason: Command failed on smithi149 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:23fcfb96e7e1a49d12a94e3f87a8e3f06db2a1ec shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8f98a79e-023a-11ef-bc93-c7b262605968 -- ceph orch apply mon '3;smithi090:172.21.15.90=a;smithi130:172.21.15.130=b;smithi149:172.21.15.149=c'"
fail | 7671318 | 2024-04-24 11:41:49 | 2024-04-24 12:47:28 | 2024-04-24 13:05:55 | 0:18:27 | 0:09:48 | 0:08:39 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9faf016e-023a-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7671319 | 2024-04-24 11:41:50 | 2024-04-24 12:48:39 | 2024-04-24 13:04:01 | 0:15:22 | 0:06:52 | 0:08:30 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi069 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9150213e-023a-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7671320 | 2024-04-24 11:41:51 | 2024-04-24 12:50:59 | 2024-04-24 13:12:46 | 0:21:47 | 0:12:30 | 0:09:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
Failure Reason: "2024-04-24T13:11:24.196896+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7671321 | 2024-04-24 11:41:52 | 2024-04-24 12:51:00 | 2024-04-24 13:06:36 | 0:15:36 | 0:08:35 | 0:07:01 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi165 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:23fcfb96e7e1a49d12a94e3f87a8e3f06db2a1ec ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0a8bd1e2-023b-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 7671322 | 2024-04-24 11:41:53 | 2024-04-24 12:51:20 | 2024-04-24 13:16:19 | 0:24:59 | 0:11:32 | 0:13:27 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7671323 | 2024-04-24 11:41:54 | 2024-04-24 12:57:01 | 2024-04-24 13:34:51 | 0:37:50 | 0:27:56 | 0:09:54 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-04-24T13:15:48.007164+0000 mon.smithi142 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7671324 | 2024-04-24 11:41:55 | 2024-04-24 12:57:12 | 2024-04-24 13:11:47 | 0:14:35 | 0:06:54 | 0:07:41 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:23fcfb96e7e1a49d12a94e3f87a8e3f06db2a1ec ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid adc509be-023b-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 7671325 | 2024-04-24 11:41:56 | 2024-04-24 12:58:32 | 2024-04-24 13:14:41 | 0:16:09 | 0:08:50 | 0:07:19 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} | 2 | |
Failure Reason: Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:23fcfb96e7e1a49d12a94e3f87a8e3f06db2a1ec ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37ddb808-023c-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
dead | 7671326 | 2024-04-24 11:41:57 | 2024-04-24 12:59:33 | 2024-04-25 01:10:46 | 12:11:13 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: hit max job timeout