User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2022-07-19 23:27:22 | 2022-07-20 09:15:21 | 2022-07-20 21:59:01 | 12:43:40 | orch:cephadm | wip-adk2-testing-2022-07-19-1528 | smithi | 4e24ac8 | 48 | 42 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6938947 | 2022-07-19 23:27:28 | 2022-07-20 09:15:20 | 2022-07-20 09:38:41 | 0:23:21 | 0:17:01 | 0:06:20 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
fail | 6938950 | 2022-07-19 23:27:28 | 2022-07-20 09:15:21 | 2022-07-20 09:32:40 | 0:17:19 | 0:10:49 | 0:06:30 | smithi | main | | | orch:cephadm/workunits/{agent/off mon_election/connectivity task/test_nfs} | 1 |
Failure Reason: Command failed on smithi139 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3dfb7ef2-080e-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
dead | 6938952 | 2022-07-19 23:27:30 | 2022-07-20 09:15:21 | 2022-07-20 21:25:38 | 12:10:17 | | | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: hit max job timeout
fail | 6938955 | 2022-07-19 23:27:30 | 2022-07-20 09:15:21 | 2022-07-20 09:36:26 | 0:21:05 | 0:14:46 | 0:06:19 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dab74b90-080e-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6938957 | 2022-07-19 23:27:31 | 2022-07-20 09:15:22 | 2022-07-20 10:00:39 | 0:45:17 | 0:38:03 | 0:07:14 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6938960 | 2022-07-19 23:27:32 | 2022-07-20 09:15:22 | 2022-07-20 09:41:23 | 0:26:01 | 0:18:06 | 0:07:55 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 6938963 | 2022-07-19 23:27:33 | 2022-07-20 09:16:03 | 2022-07-20 09:34:17 | 0:18:14 | 0:10:34 | 0:07:40 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93b6cea0-080e-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6938966 | 2022-07-19 23:27:34 | 2022-07-20 09:17:04 | 2022-07-20 09:34:33 | 0:17:29 | 0:09:33 | 0:07:56 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 841798ee-080e-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6938968 | 2022-07-19 23:27:35 | 2022-07-20 09:17:04 | 2022-07-20 09:56:38 | 0:39:34 | 0:32:12 | 0:07:22 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6938971 | 2022-07-19 23:27:36 | 2022-07-20 09:17:14 | 2022-07-20 10:18:44 | 1:01:30 | 0:55:36 | 0:05:54 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
fail | 6938973 | 2022-07-19 23:27:37 | 2022-07-20 09:17:15 | 2022-07-20 09:58:53 | 0:41:38 | 0:31:00 | 0:10:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 6938976 | 2022-07-19 23:27:38 | 2022-07-20 09:17:25 | 2022-07-20 09:54:30 | 0:37:05 | 0:25:42 | 0:11:23 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
fail | 6938979 | 2022-07-19 23:27:39 | 2022-07-20 09:17:46 | 2022-07-20 09:39:23 | 0:21:37 | 0:12:10 | 0:09:27 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2961dcf6-080f-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6938982 | 2022-07-19 23:27:40 | 2022-07-20 09:18:07 | 2022-07-20 09:39:31 | 0:21:24 | 0:14:37 | 0:06:47 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi105 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3cf35c4a-080f-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6938984 | 2022-07-19 23:27:41 | 2022-07-20 09:18:27 | 2022-07-20 09:36:49 | 0:18:22 | 0:11:17 | 0:07:05 | smithi | main | | | orch:cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} | 1 |
Failure Reason: Command failed on smithi072 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dd71808a-080e-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6938987 | 2022-07-19 23:27:42 | 2022-07-20 09:43:49 | | 1080 | | | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 |
fail | 6938989 | 2022-07-19 23:27:43 | 2022-07-20 09:18:38 | 2022-07-20 09:43:59 | 0:25:21 | 0:11:21 | 0:14:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi060 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9b435ab6-080f-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6938992 | 2022-07-19 23:27:44 | 2022-07-20 09:18:58 | 2022-07-20 09:42:39 | 0:23:41 | 0:16:49 | 0:06:52 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi149 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fdbeb182-080e-11ed-842e-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
pass | 6938995 | 2022-07-19 23:27:45 | 2022-07-20 09:19:19 | 2022-07-20 09:45:48 | 0:26:29 | 0:19:19 | 0:07:10 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 6938998 | 2022-07-19 23:27:46 | 2022-07-20 09:20:49 | 2022-07-20 10:01:44 | 0:40:55 | 0:33:21 | 0:07:34 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939000 | 2022-07-19 23:27:47 | 2022-07-20 09:22:10 | 2022-07-20 09:45:31 | 0:23:21 | 0:17:21 | 0:06:00 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 6939002 | 2022-07-19 23:27:48 | 2022-07-20 09:22:31 | 2022-07-20 09:41:26 | 0:18:55 | 0:12:42 | 0:06:13 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a5d02f2c-080f-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939005 | 2022-07-19 23:27:49 | 2022-07-20 09:22:31 | 2022-07-20 10:04:48 | 0:42:17 | 0:34:55 | 0:07:22 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
fail | 6939008 | 2022-07-19 23:27:50 | 2022-07-20 09:23:12 | 2022-07-20 09:43:45 | 0:20:33 | 0:12:51 | 0:07:42 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi103 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e7b1937c-080f-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939011 | 2022-07-19 23:27:51 | 2022-07-20 09:24:22 | 2022-07-20 09:40:51 | 0:16:29 | 0:09:34 | 0:06:55 | smithi | main | | | orch:cephadm/workunits/{agent/on mon_election/classic task/test_adoption} | 1 |
pass | 6939014 | 2022-07-19 23:27:52 | 2022-07-20 09:25:23 | 2022-07-20 09:49:35 | 0:24:12 | 0:16:39 | 0:07:33 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
fail | 6939016 | 2022-07-19 23:27:53 | 2022-07-20 09:26:24 | 2022-07-20 09:46:09 | 0:19:45 | 0:12:09 | 0:07:36 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3c090aa4-0810-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939018 | 2022-07-19 23:27:54 | 2022-07-20 09:26:24 | 2022-07-20 09:48:56 | 0:22:32 | 0:15:25 | 0:07:07 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 6939021 | 2022-07-19 23:27:54 | 2022-07-20 09:27:45 | 2022-07-20 09:51:06 | 0:23:21 | 0:15:45 | 0:07:36 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 6939024 | 2022-07-19 23:27:55 | 2022-07-20 09:27:56 | 2022-07-20 09:52:57 | 0:25:01 | 0:18:53 | 0:06:08 | smithi | main | | | orch:cephadm/workunits/{agent/off mon_election/connectivity task/test_cephadm} | 1 |
pass | 6939027 | 2022-07-19 23:27:56 | 2022-07-20 09:27:56 | 2022-07-20 09:51:07 | 0:23:11 | 0:16:27 | 0:06:44 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 6939029 | 2022-07-19 23:27:57 | 2022-07-20 09:28:07 | 2022-07-20 09:47:46 | 0:19:39 | 0:11:52 | 0:07:47 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi099 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6d482f3c-0810-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939032 | 2022-07-19 23:27:58 | 2022-07-20 09:28:37 | 2022-07-20 09:51:10 | 0:22:33 | 0:14:22 | 0:08:11 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi049 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid da20cd6c-0810-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939034 | 2022-07-19 23:27:59 | 2022-07-20 09:28:48 | 2022-07-20 10:12:14 | 0:43:26 | 0:31:07 | 0:12:19 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 6939037 | 2022-07-19 23:28:00 | 2022-07-20 09:30:49 | 2022-07-20 10:08:39 | 0:37:50 | 0:31:17 | 0:06:33 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939040 | 2022-07-19 23:28:01 | 2022-07-20 09:31:19 | 2022-07-20 10:25:08 | 0:53:49 | 0:46:15 | 0:07:34 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
pass | 6939043 | 2022-07-19 23:28:02 | 2022-07-20 09:32:10 | 2022-07-20 10:13:11 | 0:41:01 | 0:32:40 | 0:08:21 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 6939045 | 2022-07-19 23:28:03 | 2022-07-20 09:34:21 | 2022-07-20 09:55:25 | 0:21:04 | 0:14:17 | 0:06:47 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 71cab77c-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939048 | 2022-07-19 23:28:04 | 2022-07-20 09:34:21 | 2022-07-20 09:58:11 | 0:23:50 | 0:16:27 | 0:07:23 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
fail | 6939050 | 2022-07-19 23:28:05 | 2022-07-20 09:34:52 | 2022-07-20 09:46:28 | 0:11:36 | 0:05:35 | 0:06:01 | smithi | main | | | orch:cephadm/workunits/{agent/on mon_election/classic task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi111 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e24ac81906233a72da6b11568b17b2d97a920ff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 6939053 | 2022-07-19 23:28:06 | 2022-07-20 09:34:52 | 2022-07-20 09:57:15 | 0:22:23 | 0:15:10 | 0:07:13 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 6939056 | 2022-07-19 23:28:07 | 2022-07-20 09:35:33 | 2022-07-20 10:01:11 | 0:25:38 | 0:19:27 | 0:06:11 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 6939059 | 2022-07-19 23:28:08 | 2022-07-20 09:35:53 | 2022-07-20 09:57:22 | 0:21:29 | 0:13:57 | 0:07:32 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
fail | 6939061 | 2022-07-19 23:28:09 | 2022-07-20 09:36:23 | 2022-07-20 09:57:29 | 0:21:06 | 0:14:55 | 0:06:11 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c4447682-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939064 | 2022-07-19 23:28:10 | 2022-07-20 09:36:34 | 2022-07-20 09:57:57 | 0:21:23 | 0:11:48 | 0:09:35 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi083 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b0b846e8-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939066 | 2022-07-19 23:28:11 | 2022-07-20 09:36:34 | 2022-07-20 10:00:26 | 0:23:52 | 0:17:05 | 0:06:47 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 6939069 | 2022-07-19 23:28:12 | 2022-07-20 09:37:05 | 2022-07-20 10:18:05 | 0:41:00 | 0:32:01 | 0:08:59 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939072 | 2022-07-19 23:28:13 | 2022-07-20 09:38:45 | 2022-07-20 10:20:10 | 0:41:25 | 0:35:08 | 0:06:17 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
fail | 6939075 | 2022-07-19 23:28:14 | 2022-07-20 09:38:56 | 2022-07-20 09:57:52 | 0:18:56 | 0:13:11 | 0:05:45 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ed3e4a68-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939077 | 2022-07-19 23:28:15 | 2022-07-20 09:39:16 | 2022-07-20 09:58:08 | 0:18:52 | 0:12:05 | 0:06:47 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{agent/off mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid effaa314-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939080 | 2022-07-19 23:28:16 | 2022-07-20 09:39:27 | 2022-07-20 10:04:38 | 0:25:11 | 0:19:18 | 0:05:53 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 6939082 | 2022-07-19 23:28:17 | 2022-07-20 09:39:37 | 2022-07-20 09:55:58 | 0:16:21 | 0:09:33 | 0:06:48 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 810993fc-0811-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939085 | 2022-07-19 23:28:18 | 2022-07-20 09:39:37 | 2022-07-20 10:04:07 | 0:24:30 | 0:15:53 | 0:08:37 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 6939088 | 2022-07-19 23:28:19 | 2022-07-20 09:40:58 | 2022-07-20 10:06:11 | 0:25:13 | 0:18:00 | 0:07:13 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 6939091 | 2022-07-19 23:28:20 | 2022-07-20 09:41:28 | 2022-07-20 10:02:47 | 0:21:19 | 0:13:51 | 0:07:28 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi153 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 71b8fd7e-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939093 | 2022-07-19 23:28:21 | 2022-07-20 09:41:29 | 2022-07-20 10:01:56 | 0:20:27 | 0:12:18 | 0:08:09 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi149 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6ec40758-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939095 | 2022-07-19 23:28:22 | 2022-07-20 09:42:49 | 2022-07-20 10:23:22 | 0:40:33 | 0:30:13 | 0:10:20 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 6939098 | 2022-07-19 23:28:23 | 2022-07-20 09:43:50 | 2022-07-20 10:01:08 | 0:17:18 | 0:10:15 | 0:07:03 | smithi | main | | | orch:cephadm/workunits/{agent/on mon_election/classic task/test_nfs} | 1 |
Failure Reason: Command failed on smithi156 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29b27be0-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939101 | 2022-07-19 23:28:24 | 2022-07-20 09:43:50 | 2022-07-20 10:25:28 | 0:41:38 | 0:31:08 | 0:10:30 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 6939104 | 2022-07-19 23:28:25 | 2022-07-20 09:43:51 | 2022-07-20 10:04:45 | 0:20:54 | 0:14:38 | 0:06:16 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c7baffba-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939106 | 2022-07-19 23:28:26 | 2022-07-20 09:43:51 | 2022-07-20 10:24:35 | 0:40:44 | 0:33:02 | 0:07:42 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939109 | 2022-07-19 23:28:27 | 2022-07-20 09:44:01 | 2022-07-20 10:47:49 | 1:03:48 | 0:56:01 | 0:07:47 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
pass | 6939111 | 2022-07-19 23:28:27 | 2022-07-20 09:44:22 | 2022-07-20 10:07:35 | 0:23:13 | 0:15:42 | 0:07:31 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 6939114 | 2022-07-19 23:28:28 | 2022-07-20 09:44:42 | 2022-07-20 10:21:02 | 0:36:20 | 0:25:01 | 0:11:19 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 6939117 | 2022-07-19 23:28:29 | 2022-07-20 09:45:33 | 2022-07-20 10:07:18 | 0:21:45 | 0:11:18 | 0:10:27 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f37f68ca-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939120 | 2022-07-19 23:28:30 | 2022-07-20 09:45:53 | 2022-07-20 10:07:13 | 0:21:20 | 0:13:57 | 0:07:23 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi019 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1580209a-0813-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939122 | 2022-07-19 23:28:31 | 2022-07-20 09:46:14 | 2022-07-20 10:03:39 | 0:17:25 | 0:10:14 | 0:07:11 | smithi | main | | | orch:cephadm/workunits/{agent/off mon_election/connectivity task/test_orch_cli} | 1 |
Failure Reason: Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 83f626ce-0812-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939124 | 2022-07-19 23:28:32 | 2022-07-20 09:46:14 | 2022-07-20 10:30:17 | 0:44:03 | 0:37:15 | 0:06:48 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6939127 | 2022-07-19 23:28:33 | 2022-07-20 09:47:05 | 2022-07-20 10:10:48 | 0:23:43 | 0:16:27 | 0:07:16 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
dead | 6939130 | 2022-07-19 23:28:34 | 2022-07-20 09:47:56 | 2022-07-20 21:59:01 | 12:11:05 | | | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: hit max job timeout
fail | 6939133 | 2022-07-19 23:28:35 | 2022-07-20 09:48:56 | 2022-07-20 10:11:06 | 0:22:10 | 0:12:18 | 0:09:52 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi106 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a2cb3d04-0813-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939136 | 2022-07-19 23:28:36 | 2022-07-20 09:49:37 | 2022-07-20 10:17:07 | 0:27:30 | 0:19:55 | 0:07:35 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 6939138 | 2022-07-19 23:28:37 | 2022-07-20 09:49:57 | 2022-07-20 10:30:11 | 0:40:14 | 0:32:14 | 0:08:00 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939140 | 2022-07-19 23:28:38 | 2022-07-20 09:50:58 | 2022-07-20 10:14:03 | 0:23:05 | 0:16:53 | 0:06:12 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 6939143 | 2022-07-19 23:28:39 | 2022-07-20 09:51:08 | 2022-07-20 10:10:28 | 0:19:20 | 0:12:52 | 0:06:28 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi197 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b530926e-0813-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939146 | 2022-07-19 23:28:40 | 2022-07-20 09:51:09 | 2022-07-20 10:44:17 | 0:53:08 | 0:47:28 | 0:05:40 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
fail | 6939149 | 2022-07-19 23:28:41 | 2022-07-20 09:51:09 | 2022-07-20 10:11:38 | 0:20:29 | 0:13:02 | 0:07:27 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi049 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e362143c-0813-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939151 | 2022-07-19 23:28:42 | 2022-07-20 09:51:20 | 2022-07-20 10:08:46 | 0:17:26 | 0:10:59 | 0:06:27 | smithi | main | orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_adoption} | 1 | |||
pass | 6939153 | 2022-07-19 23:28:43 | 2022-07-20 09:51:20 | 2022-07-20 10:17:38 | 0:26:18 | 0:18:48 | 0:07:30 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
fail | 6939156 | 2022-07-19 23:28:44 | 2022-07-20 09:52:41 | 2022-07-20 10:12:01 | 0:19:20 | 0:13:04 | 0:06:16 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid efd67b68-0813-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939159 | 2022-07-19 23:28:45 | 2022-07-20 09:53:01 | 2022-07-20 10:15:26 | 0:22:25 | 0:14:21 | 0:08:04 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi052 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a91b19cc-0813-11ed-842e-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
fail | 6939162 | 2022-07-19 23:28:46 | 2022-07-20 09:54:12 | 2022-07-20 10:34:04 | 0:39:52 | 0:29:45 | 0:10:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 6939164 | 2022-07-19 23:28:47 | 2022-07-20 09:54:32 | 2022-07-20 10:20:00 | 0:25:28 | 0:18:29 | 0:06:59 | smithi | main | orch:cephadm/workunits/{agent/off mon_election/classic task/test_cephadm} | 1 | |||
pass | 6939167 | 2022-07-19 23:28:47 | 2022-07-20 09:54:33 | 2022-07-20 10:17:19 | 0:22:46 | 0:16:38 | 0:06:08 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 6939169 | 2022-07-19 23:28:48 | 2022-07-20 09:54:33 | 2022-07-20 10:14:26 | 0:19:53 | 0:12:35 | 0:07:18 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 397eae84-0814-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6939172 | 2022-07-19 23:28:49 | 2022-07-20 09:55:34 | 2022-07-20 10:16:27 | 0:20:53 | 0:15:05 | 0:05:48 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 77c52268-0814-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939175 | 2022-07-19 23:28:50 | 2022-07-20 09:56:04 | 2022-07-20 10:18:03 | 0:21:59 | 0:14:55 | 0:07:04 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 6939178 | 2022-07-19 23:28:51 | 2022-07-20 09:56:44 | 2022-07-20 10:36:30 | 0:39:46 | 0:32:05 | 0:07:41 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6939180 | 2022-07-19 23:28:52 | 2022-07-20 09:57:25 | 2022-07-20 10:39:30 | 0:42:05 | 0:34:06 | 0:07:59 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
fail | 6939182 | 2022-07-19 23:28:53 | 2022-07-20 09:57:26 | 2022-07-20 10:18:29 | 0:21:03 | 0:13:44 | 0:07:19 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:4e24ac81906233a72da6b11568b17b2d97a920ff shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a3b949c6-0814-11ed-842e-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
pass | 6939183 | 2022-07-19 23:28:54 | 2022-07-20 09:57:36 | 2022-07-20 10:18:45 | 0:21:09 | 0:15:17 | 0:05:52 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
fail | 6939185 | 2022-07-19 23:28:55 | 2022-07-20 09:57:57 | 2022-07-20 10:09:17 | 0:11:20 | 0:05:37 | 0:05:43 | smithi | main | orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |||
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e24ac81906233a72da6b11568b17b2d97a920ff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'