User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
phlogistonjohn | 2023-11-30 21:17:15 | 2023-12-01 05:02:30 | 2023-12-01 06:23:50 | 1:21:20 | orch:cephadm | wip-phlogistonjohn-testing-2023-11-30-1010 | smithi | 875037c | 1 | 88 | 2 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7472913 | 2023-11-30 21:17:33 | 2023-12-01 05:02:30 | 2023-12-01 05:46:13 | 0:43:43 | 0:32:11 | 0:11:32 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6d85403e-9009-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 1\'"\'"\'\'' |
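Several jobs in this run (e.g. 7472913, 7472937, 7472957) fail not during the upgrade itself but on the post-upgrade version check. Once the shell's nested `'"'"'` quoting is unwrapped, the command is just `ceph versions | jq -e '.mgr | length == 1'`; `jq -e` exits nonzero when the filter yields false, which is why teuthology reports "Command failed ... with status 1". A minimal sketch of the same check in Python, using a hypothetical abbreviated sample of `ceph versions` output (not captured from this run):

```python
import json

# Hypothetical mid-upgrade `ceph versions` output, abbreviated; the real
# command prints one count per (daemon type, running version) pair.
sample = json.loads("""
{
  "mgr": {
    "ceph version 17.2.0 quincy (stable)": 1,
    "ceph version 18.2.1 reef (stable)": 1
  }
}
""")

# Equivalent of `jq -e '.mgr | length == 1'`: the check passes only when
# every mgr daemon reports the same version. Two entries here means the
# staggered upgrade never converged, so the job is marked failed.
mgr_converged = len(sample["mgr"]) == 1
print(mgr_converged)  # False: two mgr versions are still running
```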
fail | 7472914 | 2023-11-30 21:17:34 | 2023-12-01 05:02:41 | 2023-12-01 05:24:15 | 0:21:34 | 0:09:18 | 0:12:16 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=875037c4c0585d32b171b61f9b83f482e6b73b3d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 7472915 | 2023-11-30 21:17:35 | 2023-11-30 22:59:51 | 2023-11-30 23:16:49 | 0:16:58 | 0:08:00 | 0:08:58 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
Failure Reason:
Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 66d8114a-8fd6-11ee-95a2-87774f69a715 --force' |
dead | 7472916 | 2023-11-30 21:17:36 | 2023-12-01 05:05:21 | 2023-12-01 06:06:18 | 1:00:57 | | | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds |
fail | 7472917 | 2023-11-30 21:17:37 | 2023-12-01 05:06:02 | 2023-12-01 05:26:39 | 0:20:37 | 0:09:46 | 0:10:51 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} overrides/ignorelist_health supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
Failure Reason:
Command failed on smithi161 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 17752866-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472918 | 2023-11-30 21:17:38 | 2023-12-01 05:06:52 | 2023-12-01 05:27:44 | 0:20:52 | 0:12:07 | 0:08:45 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_3.0} 2-node-mgr agent/off orchestrator_cli} | 2 | |
Failure Reason:
Command failed on smithi158 with status 1: 'sudo yum -y install ceph' |
fail | 7472919 | 2023-11-30 21:17:39 | 2023-12-01 05:08:03 | 2023-12-01 05:27:48 | 0:19:45 | 0:08:35 | 0:11:10 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason:
Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3442b364-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472920 | 2023-11-30 21:17:39 | 2023-12-01 05:08:03 | 2023-12-01 05:24:17 | 0:16:14 | 0:06:09 | 0:10:05 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi070 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid bec17332-9009-11ee-95a2-87774f69a715 --force' |
fail | 7472921 | 2023-11-30 21:17:40 | 2023-12-01 05:08:23 | 2023-12-01 05:27:39 | 0:19:16 | 0:09:01 | 0:10:15 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
Failure Reason:
Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 1fdfa77e-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472922 | 2023-11-30 21:17:41 | 2023-12-01 05:08:34 | 2023-12-01 05:28:16 | 0:19:42 | 0:11:53 | 0:07:49 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi076 with status 1: 'sudo yum -y install ceph' |
fail | 7472923 | 2023-11-30 21:17:42 | 2023-12-01 05:08:44 | 2023-12-01 05:28:11 | 0:19:27 | 0:12:03 | 0:07:24 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi125 with status 1: 'sudo yum -y install ceph' |
fail | 7472924 | 2023-11-30 21:17:43 | 2023-12-01 05:08:55 | 2023-12-01 05:30:55 | 0:22:00 | 0:12:05 | 0:09:55 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_extra_daemon_features} | 2 | |
Failure Reason:
Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid bb07ec66-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472925 | 2023-11-30 21:17:44 | 2023-12-01 05:09:05 | 2023-12-01 05:26:41 | 0:17:36 | 0:10:47 | 0:06:49 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
Command failed on smithi146 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 0ea58398-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472926 | 2023-11-30 21:17:44 | 2023-12-01 05:09:15 | 2023-12-01 05:28:23 | 0:19:08 | 0:08:57 | 0:10:11 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2d90b052-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472927 | 2023-11-30 21:17:45 | 2023-12-01 05:09:16 | 2023-12-01 05:28:24 | 0:19:08 | 0:10:53 | 0:08:15 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi139 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4fb0a48a-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472928 | 2023-11-30 21:17:46 | 2023-12-01 05:11:06 | 2023-12-01 05:31:03 | 0:19:57 | 0:12:41 | 0:07:16 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason:
Command failed on smithi154 with status 1: 'sudo yum -y install ceph' |
fail | 7472929 | 2023-11-30 21:17:47 | 2023-12-01 05:11:27 | 2023-12-01 05:32:45 | 0:21:18 | 0:10:28 | 0:10:50 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid dc8b7268-900a-11ee-95a2-87774f69a715 --force' |
fail | 7472930 | 2023-11-30 21:17:48 | 2023-12-01 05:15:38 | 2023-12-01 05:33:46 | 0:18:08 | 0:10:18 | 0:07:50 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi176 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest python3-pytest python3-pytest python3-pytest' |
fail | 7472931 | 2023-11-30 21:17:49 | 2023-12-01 05:16:38 | 2023-12-01 05:56:35 | 0:39:57 | 0:29:35 | 0:10:22 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92f10ca6-900c-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 7472932 | 2023-11-30 21:17:50 | 2023-12-01 05:16:49 | 2023-12-01 05:37:08 | 0:20:19 | 0:06:19 | 0:14:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 79564bf4-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472933 | 2023-11-30 21:17:50 | 2023-12-01 05:17:29 | 2023-12-01 05:38:18 | 0:20:49 | 0:10:11 | 0:10:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi106 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b53ac370-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472934 | 2023-11-30 21:17:51 | 2023-12-01 05:18:10 | 2023-12-01 05:43:04 | 0:24:54 | 0:14:28 | 0:10:26 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
Failure Reason:
Command failed on smithi043 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3b77a462-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472935 | 2023-11-30 21:17:52 | 2023-12-01 05:18:20 | 2023-12-01 05:38:33 | 0:20:13 | 0:09:47 | 0:10:26 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi005 with status 1: 'sudo cephadm rm-cluster --fsid b51e280a-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472936 | 2023-11-30 21:17:53 | 2023-12-01 05:18:40 | 2023-12-01 05:38:19 | 0:19:39 | 0:08:25 | 0:11:14 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 85baa354-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472937 | 2023-11-30 21:17:54 | 2023-12-01 05:19:11 | 2023-12-01 05:55:31 | 0:36:20 | 0:26:42 | 0:09:38 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4d67a538-900b-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 7472938 | 2023-11-30 21:17:55 | 2023-12-01 05:19:31 | 2023-12-01 05:42:24 | 0:22:53 | 0:08:37 | 0:14:16 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi023 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 29f22514-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472939 | 2023-11-30 21:17:56 | 2023-12-01 05:21:12 | 2023-12-01 05:39:40 | 0:18:28 | 0:10:32 | 0:07:56 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason:
Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d6513760-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472940 | 2023-11-30 21:17:57 | 2023-12-01 05:22:32 | 2023-12-01 05:40:06 | 0:17:34 | 0:09:51 | 0:07:43 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid cde8550e-900b-11ee-95a2-87774f69a715 --force' |
fail | 7472941 | 2023-11-30 21:17:57 | 2023-12-01 05:23:23 | 2023-12-01 05:41:18 | 0:17:55 | 0:08:15 | 0:09:40 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason:
Command failed on smithi070 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2bca1540-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472942 | 2023-11-30 21:17:58 | 2023-12-01 05:24:23 | 2023-12-01 05:43:38 | 0:19:15 | 0:09:10 | 0:10:05 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason:
Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 6c54193a-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472943 | 2023-11-30 21:17:59 | 2023-12-01 05:24:34 | 2023-12-01 05:47:38 | 0:23:04 | 0:13:23 | 0:09:41 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi001 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 059ce0fe-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472944 | 2023-11-30 21:18:00 | 2023-12-01 05:24:54 | 2023-12-01 05:46:37 | 0:21:43 | 0:12:18 | 0:09:25 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi119 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f26ee324-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472945 | 2023-11-30 21:18:01 | 2023-12-01 05:25:25 | 2023-12-01 05:46:23 | 0:20:58 | 0:12:30 | 0:08:28 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi098 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid eda1b4d4-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472946 | 2023-11-30 21:18:02 | 2023-12-01 05:25:25 | 2023-12-01 05:43:58 | 0:18:33 | 0:10:51 | 0:07:42 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 7ab8eafa-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472947 | 2023-11-30 21:18:03 | 2023-12-01 05:25:55 | 2023-12-01 05:59:54 | 0:33:59 | 0:23:10 | 0:10:49 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi146 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3a9ae97c-900d-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 1\'"\'"\'\'' |
fail | 7472948 | 2023-11-30 21:18:03 | 2023-12-01 05:26:46 | 2023-12-01 05:46:04 | 0:19:18 | 0:10:58 | 0:08:20 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid cb6b2008-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472949 | 2023-11-30 21:18:04 | 2023-12-01 05:27:36 | 2023-12-01 05:53:22 | 0:25:46 | 0:13:26 | 0:12:20 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
Failure Reason:
Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c235396e-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472950 | 2023-11-30 21:18:05 | 2023-12-01 05:27:47 | 2023-12-01 05:48:59 | 0:21:12 | 0:10:58 | 0:10:14 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi120 with status 1: 'sudo cephadm rm-cluster --fsid 2b579604-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472951 | 2023-11-30 21:18:06 | 2023-12-01 05:27:57 | 2023-12-01 05:45:42 | 0:17:45 | 0:10:41 | 0:07:04 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b51f571a-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472952 | 2023-11-30 21:18:07 | 2023-12-01 05:27:58 | 2023-12-01 05:44:07 | 0:16:09 | 0:06:28 | 0:09:41 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 952e1ad6-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472953 | 2023-11-30 21:18:08 | 2023-12-01 05:28:08 | 2023-12-01 05:44:16 | 0:16:08 | 0:06:20 | 0:09:48 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
Failure Reason:
Command failed on smithi105 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 946681ec-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472954 | 2023-11-30 21:18:08 | 2023-12-01 05:28:08 | 2023-12-01 05:51:32 | 0:23:24 | 0:12:11 | 0:11:13 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi125 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 94eff1f6-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472955 | 2023-11-30 21:18:09 | 2023-12-01 05:28:19 | 2023-12-01 05:47:20 | 0:19:01 | 0:12:41 | 0:06:20 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi088 with status 1: 'sudo yum -y install ceph' |
fail | 7472956 | 2023-11-30 21:18:10 | 2023-12-01 05:28:19 | 2023-12-01 05:47:39 | 0:19:20 | 0:08:20 | 0:11:00 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi139 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f27e1d26-900c-11ee-95a2-87774f69a715 --force' |
fail | 7472957 | 2023-11-30 21:18:11 | 2023-12-01 05:28:30 | 2023-12-01 06:11:47 | 0:43:17 | 0:33:12 | 0:10:05 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24564396-900d-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 1\'"\'"\'\'' |
fail | 7472958 | 2023-11-30 21:18:12 | 2023-12-01 05:28:30 | 2023-12-01 05:48:23 | 0:19:53 | 0:12:51 | 0:07:02 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi156 with status 1: 'sudo yum -y install ceph' |
fail | 7472959 | 2023-11-30 21:18:13 | 2023-12-01 05:28:40 | 2023-12-01 05:50:20 | 0:21:40 | 0:12:26 | 0:09:14 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason:
Command failed on smithi174 with status 1: 'sudo yum -y install ceph' |
fail | 7472960 | 2023-11-30 21:18:14 | 2023-12-01 05:31:01 | 2023-12-01 05:50:29 | 0:19:28 | 0:08:16 | 0:11:12 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason:
Command failed on smithi149 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4a88c2e6-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472961 | 2023-11-30 21:18:14 | 2023-12-01 05:31:12 | 2023-12-01 06:02:21 | 0:31:09 | 0:22:00 | 0:09:09 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid af3ae6e2-900d-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 1\'"\'"\'\'' |
pass | 7472962 | 2023-11-30 21:18:15 | 2023-12-01 05:31:12 | 2023-12-01 05:57:10 | 0:25:58 | 0:13:51 | 0:12:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr agent/on orchestrator_cli} | 2 | |
fail | 7472963 | 2023-11-30 21:18:16 | 2023-12-01 05:32:42 | 2023-12-01 05:51:34 | 0:18:52 | 0:10:54 | 0:07:58 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason:
Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 8cf4311a-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472964 | 2023-11-30 21:18:17 | 2023-12-01 05:32:53 | 2023-12-01 05:50:57 | 0:18:04 | 0:08:12 | 0:09:52 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 8e219af0-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472965 | 2023-11-30 21:18:18 | 2023-12-01 05:33:53 | 2023-12-01 05:53:22 | 0:19:29 | 0:10:46 | 0:08:43 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason:
Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c775ff12-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472966 | 2023-11-30 21:18:19 | 2023-12-01 05:36:14 | 2023-12-01 05:53:35 | 0:17:21 | 0:10:37 | 0:06:44 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid cb95307c-900d-11ee-95a2-87774f69a715 --force' |
fail | 7472967 | 2023-11-30 21:18:20 | 2023-12-01 05:36:14 | 2023-12-01 05:57:41 | 0:21:27 | 0:09:08 | 0:12:19 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
Failure Reason:
Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 7a2f032e-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472968 | 2023-11-30 21:18:20 | 2023-12-01 05:37:15 | 2023-12-01 05:56:27 | 0:19:12 | 0:12:31 | 0:06:41 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi172 with status 1: 'sudo yum -y install ceph' |
fail | 7472969 | 2023-11-30 21:18:21 | 2023-12-01 05:37:15 | 2023-12-01 05:57:22 | 0:20:07 | 0:12:30 | 0:07:37 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi153 with status 1: 'sudo yum -y install ceph' |
fail | 7472970 | 2023-11-30 21:18:22 | 2023-12-01 05:38:26 | 2023-12-01 05:58:52 | 0:20:26 | 0:09:30 | 0:10:56 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi190 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=875037c4c0585d32b171b61f9b83f482e6b73b3d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
fail | 7472971 | 2023-11-30 21:18:23 | 2023-12-01 05:38:26 | 2023-12-01 05:55:31 | 0:17:05 | 0:10:51 | 0:06:14 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
Command failed on smithi102 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 179d3f0a-900e-11ee-95a2-87774f69a715 --force' |
dead | 7472972 | 2023-11-30 21:18:24 | 2023-12-01 05:38:26 | 2023-12-01 05:39:46 | 0:01:20 | | | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_ca_signed_key} | 2 | |
Failure Reason:
Error reimaging machines: Failed to power on smithi005 |
fail | 7472973 | 2023-11-30 21:18:25 | 2023-12-01 05:38:37 | 2023-12-01 05:59:10 | 0:20:33 | 0:08:45 | 0:11:48 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi040 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b7ffe04c-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472974 | 2023-11-30 21:18:26 | 2023-12-01 05:39:17 | 2023-12-01 05:55:43 | 0:16:26 | 0:06:14 | 0:10:12 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason:
Command failed on smithi064 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 270ab81e-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472975 | 2023-11-30 21:18:26 | 2023-12-01 05:39:48 | 2023-12-01 05:58:53 | 0:19:05 | 0:12:02 | 0:07:03 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi106 with status 1: 'sudo yum -y install ceph' |
fail | 7472976 | 2023-11-30 21:18:27 | 2023-12-01 05:39:48 | 2023-12-01 06:04:39 | 0:24:51 | 0:14:48 | 0:10:03 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=875037c4c0585d32b171b61f9b83f482e6b73b3d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
fail | 7472977 | 2023-11-30 21:18:28 | 2023-12-01 05:39:58 | 2023-12-01 05:58:41 | 0:18:43 | 0:09:19 | 0:09:24 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
Command failed on smithi006 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91cc64f4-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472978 | 2023-11-30 21:18:29 | 2023-12-01 05:40:09 | 2023-12-01 06:17:21 | 0:37:12 | 0:26:12 | 0:11:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi070 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5c8abc00-900e-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 7472979 | 2023-11-30 21:18:30 | 2023-12-01 05:41:29 | 2023-12-01 05:57:48 | 0:16:19 | 0:06:22 | 0:09:57 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 76262802-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472980 | 2023-11-30 21:18:31 | 2023-12-01 05:41:40 | 2023-12-01 06:02:15 | 0:20:35 | 0:08:37 | 0:11:58 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi023 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f69daea6-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472981 | 2023-11-30 21:18:32 | 2023-12-01 05:42:30 | 2023-12-01 06:06:03 | 0:23:33 | 0:13:18 | 0:10:15 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi043 with status 1: 'sudo cephadm rm-cluster --fsid 9078af76-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472982 | 2023-11-30 21:18:32 | 2023-12-01 05:43:11 | 2023-12-01 06:20:27 | 0:37:16 | 0:26:54 | 0:10:22 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b15e40d4-900f-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 7472983 | 2023-11-30 21:18:33 | 2023-12-01 05:43:41 | 2023-12-01 06:03:35 | 0:19:54 | 0:09:07 | 0:10:47 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 3adc4b04-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472984 | 2023-11-30 21:18:34 | 2023-12-01 05:44:02 | 2023-12-01 06:02:57 | 0:18:55 | 0:11:54 | 0:07:01 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=875037c4c0585d32b171b61f9b83f482e6b73b3d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 7472985 | 2023-11-30 21:18:35 | 2023-12-01 05:44:02 | 2023-12-01 06:00:59 | 0:16:57 | 0:10:50 | 0:06:07 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
Failure Reason:
Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid dba5ac0c-900e-11ee-95a2-87774f69a715 --force' |
fail | 7472986 | 2023-11-30 21:18:36 | 2023-12-01 05:44:12 | 2023-12-01 06:05:27 | 0:21:15 | 0:08:41 | 0:12:34 | smithi | main | centos | 8.stream | orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason:
Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 80e6791c-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472987 | 2023-11-30 21:18:37 | 2023-12-01 05:45:43 | 2023-12-01 06:07:18 | 0:21:35 | 0:08:35 | 0:13:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi066 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid bb3b8a6c-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472988 | 2023-11-30 21:18:38 | 2023-12-01 05:46:13 | 2023-12-01 06:09:32 | 0:23:19 | 0:12:36 | 0:10:43 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f7da0386-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472989 | 2023-11-30 21:18:38 | 2023-12-01 05:46:14 | 2023-12-01 06:03:38 | 0:17:24 | 0:11:25 | 0:05:59 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_extra_daemon_features} | 2 | |
Failure Reason:
Command failed on smithi098 with status 1: 'sudo yum -y install ceph' |
fail | 7472990 | 2023-11-30 21:18:39 | 2023-12-01 05:46:24 | 2023-12-01 06:03:25 | 0:17:01 | 0:09:54 | 0:07:07 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
Command failed on smithi091 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 10f836f4-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472991 | 2023-11-30 21:18:40 | 2023-12-01 05:46:45 | 2023-12-01 06:03:43 | 0:16:58 | 0:08:20 | 0:08:38 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason:
Command failed on smithi119 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 5245036c-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472992 | 2023-11-30 21:18:41 | 2023-12-01 05:46:45 | 2023-12-01 06:03:52 | 0:17:07 | 0:11:09 | 0:05:58 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 4dd8dbc8-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472993 | 2023-11-30 21:18:42 | 2023-12-01 05:47:16 | 2023-12-01 06:07:55 | 0:20:39 | 0:08:46 | 0:11:53 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason:
Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid b48adc90-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472994 | 2023-11-30 21:18:43 | 2023-12-01 05:47:26 | 2023-12-01 06:03:33 | 0:16:07 | 0:06:26 | 0:09:41 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason:
Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 40060d72-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472995 | 2023-11-30 21:18:44 | 2023-12-01 05:47:46 | 2023-12-01 06:04:52 | 0:17:06 | 0:11:29 | 0:05:37 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi177 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest python3-pytest python3-pytest python3-pytest' |
fail | 7472996 | 2023-11-30 21:18:45 | 2023-12-01 05:48:17 | 2023-12-01 06:23:50 | 0:35:33 | 0:26:02 | 0:09:31 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 55f62cce-9010-11ee-95a2-87774f69a715 -e sha1=875037c4c0585d32b171b61f9b83f482e6b73b3d -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 1\'"\'"\'\'' |
fail | 7472997 | 2023-11-30 21:18:45 | 2023-12-01 05:48:27 | 2023-12-01 06:07:58 | 0:19:31 | 0:08:26 | 0:11:05 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
Command failed on smithi120 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid a7cd1fe0-900f-11ee-95a2-87774f69a715 --force' |
fail | 7472998 | 2023-11-30 21:18:46 | 2023-12-01 05:49:08 | 2023-12-01 06:14:07 | 0:24:59 | 0:12:18 | 0:12:41 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi121 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 8a4eff14-9010-11ee-95a2-87774f69a715 --force' |
fail | 7472999 | 2023-11-30 21:18:47 | 2023-12-01 05:50:28 | 2023-12-01 06:11:41 | 0:21:13 | 0:09:33 | 0:11:40 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi149 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull' |
fail | 7473000 | 2023-11-30 21:18:48 | 2023-12-01 05:50:39 | 2023-12-01 06:14:03 | 0:23:24 | 0:12:13 | 0:11:11 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
Failure Reason:
Command failed on smithi178 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 8896a5a0-9010-11ee-95a2-87774f69a715 --force' |
fail | 7473001 | 2023-11-30 21:18:49 | 2023-12-01 05:50:39 | 2023-12-01 06:07:59 | 0:17:20 | 0:11:28 | 0:05:52 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
Failure Reason:
Command failed on smithi096 with status 1: 'sudo yum -y install ceph' |
fail | 7473002 | 2023-11-30 21:18:50 | 2023-12-01 05:50:59 | 2023-12-01 06:10:48 | 0:19:49 | 0:08:25 | 0:11:24 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
Command failed on smithi125 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 2497df10-9010-11ee-95a2-87774f69a715 --force' |
fail | 7473003 | 2023-11-30 21:18:51 | 2023-12-01 05:51:40 | 2023-12-01 06:20:52 | 0:29:12 | 0:18:23 | 0:10:49 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi028 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 7533497c-9011-11ee-95a2-87774f69a715 --force' |