User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2023-01-19 06:14:52 | 2023-01-19 06:15:52 | 2023-01-19 19:42:21 | 13:26:29 | orch:cephadm | wip-guits-testing-2023-01-18-2355 | smithi | 8c2873f | 35 | 54 | 8 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7129886 | 2023-01-19 06:15:01 | 2023-01-19 06:15:49 | 2023-01-19 06:44:42 | 0:28:53 | 0:18:35 | 0:10:18 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129887 | 2023-01-19 06:15:02 | 2023-01-19 06:15:49 | 2023-01-19 06:28:15 | 0:12:26 | 0:06:28 | 0:05:58 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason: Command failed on smithi032 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7129888 | 2023-01-19 06:15:03 | 2023-01-19 06:15:50 | 2023-01-19 06:55:22 | 0:39:32 | 0:27:52 | 0:11:40 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 7129889 | 2023-01-19 06:15:04 | 2023-01-19 06:15:50 | 2023-01-19 06:42:42 | 0:26:52 | 0:18:32 | 0:08:20 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129890 | 2023-01-19 06:15:05 | 2023-01-19 06:15:50 | 2023-01-19 06:36:32 | 0:20:42 | 0:13:28 | 0:07:14 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi019 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f6944bda-97c2-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
fail | 7129891 | 2023-01-19 06:15:06 | 2023-01-19 06:15:51 | 2023-01-19 06:36:57 | 0:21:06 | 0:10:51 | 0:10:15 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi027 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2cfef59e-97c3-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129892 | 2023-01-19 06:15:07 | 2023-01-19 06:15:51 | 2023-01-19 06:40:03 | 0:24:12 | 0:17:30 | 0:06:42 | smithi | main | rhel | 8.6 | orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7129893 | 2023-01-19 06:15:07 | 2023-01-19 06:15:51 | 2023-01-19 07:03:38 | 0:47:47 | 0:37:32 | 0:10:15 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 | |
pass | 7129894 | 2023-01-19 06:15:08 | 2023-01-19 06:15:51 | 2023-01-19 06:41:52 | 0:26:01 | 0:13:54 | 0:12:07 | smithi | main | centos | 8.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} | 1 | |
fail | 7129895 | 2023-01-19 06:15:09 | 2023-01-19 06:15:52 | 2023-01-19 06:36:02 | 0:20:10 | 0:09:33 | 0:10:37 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi149 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 016d533a-97c3-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129896 | 2023-01-19 06:15:10 | 2023-01-19 06:15:52 | 2023-01-19 06:54:05 | 0:38:13 | 0:28:55 | 0:09:18 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
fail | 7129897 | 2023-01-19 06:15:11 | 2023-01-19 06:15:52 | 2023-01-19 06:48:00 | 0:32:08 | 0:25:53 | 0:06:15 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
fail | 7129898 | 2023-01-19 06:15:12 | 2023-01-19 06:15:53 | 2023-01-19 06:44:54 | 0:29:01 | 0:18:41 | 0:10:20 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129899 | 2023-01-19 06:15:13 | 2023-01-19 06:15:53 | 2023-01-19 06:35:48 | 0:19:55 | 0:11:03 | 0:08:52 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fa623f2e-97c2-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi078:/dev/nvme4n1'
pass | 7129900 | 2023-01-19 06:15:14 | 2023-01-19 06:15:53 | 2023-01-19 07:01:12 | 0:45:19 | 0:38:00 | 0:07:19 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
pass | 7129901 | 2023-01-19 06:15:14 | 2023-01-19 06:15:53 | 2023-01-19 07:11:42 | 0:55:49 | 0:42:55 | 0:12:54 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7129902 | 2023-01-19 06:15:15 | 2023-01-19 06:15:54 | 2023-01-19 06:45:39 | 0:29:45 | 0:18:58 | 0:10:47 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} | 1 | |
fail | 7129903 | 2023-01-19 06:15:16 | 2023-01-19 06:15:54 | 2023-01-19 06:45:32 | 0:29:38 | 0:18:01 | 0:11:37 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129904 | 2023-01-19 06:15:17 | 2023-01-19 06:15:54 | 2023-01-19 06:52:48 | 0:36:54 | 0:27:19 | 0:09:35 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 7129905 | 2023-01-19 06:15:18 | 2023-01-19 06:15:54 | 2023-01-19 06:45:23 | 0:29:29 | 0:18:36 | 0:10:53 | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129906 | 2023-01-19 06:15:19 | 2023-01-19 06:15:55 | 2023-01-19 06:40:39 | 0:24:44 | 0:15:03 | 0:09:41 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi002 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8a2bd444-97c3-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129907 | 2023-01-19 06:15:20 | 2023-01-19 06:15:55 | 2023-01-19 06:52:25 | 0:36:30 | 0:26:50 | 0:09:40 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7129908 | 2023-01-19 06:15:21 | 2023-01-19 06:15:55 | 2023-01-19 06:54:11 | 0:38:16 | 0:27:09 | 0:11:07 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
fail | 7129909 | 2023-01-19 06:15:22 | 2023-01-19 06:15:55 | 2023-01-19 07:02:25 | 0:46:30 | 0:31:11 | 0:15:19 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129910 | 2023-01-19 06:15:23 | 2023-01-19 06:15:56 | 2023-01-19 06:41:04 | 0:25:08 | 0:09:15 | 0:15:53 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 710ecab6-97c3-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi039:/dev/nvme4n1'
pass | 7129911 | 2023-01-19 06:15:23 | 2023-01-19 06:15:56 | 2023-01-19 07:32:21 | 1:16:25 | 1:02:55 | 0:13:30 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 7129912 | 2023-01-19 06:15:24 | 2023-01-19 06:15:56 | 2023-01-19 07:12:13 | 0:56:17 | 0:39:11 | 0:17:06 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7129913 | 2023-01-19 06:15:25 | 2023-01-19 06:26:58 | 2023-01-19 06:45:24 | 0:18:26 | 0:12:55 | 0:05:31 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_adoption} | 1 | |
fail | 7129914 | 2023-01-19 06:15:26 | 2023-01-19 06:26:58 | 2023-01-19 06:59:46 | 0:32:48 | 0:18:04 | 0:14:44 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129915 | 2023-01-19 06:15:27 | 2023-01-19 06:34:30 | 2023-01-19 07:08:43 | 0:34:13 | 0:26:39 | 0:07:34 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 7129916 | 2023-01-19 06:15:28 | 2023-01-19 06:35:50 | 2023-01-19 06:55:49 | 0:19:59 | 0:13:09 | 0:06:50 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi149 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dd4a7c1e-97c5-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
fail | 7129917 | 2023-01-19 06:15:29 | 2023-01-19 06:36:10 | 2023-01-19 06:54:09 | 0:17:59 | 0:07:27 | 0:10:32 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi019 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c87cc74-97c5-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129918 | 2023-01-19 06:15:30 | 2023-01-19 06:36:41 | 2023-01-19 07:02:34 | 0:25:53 | 0:20:04 | 0:05:49 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} | 1 | |
fail | 7129919 | 2023-01-19 06:15:30 | 2023-01-19 06:36:41 | 2023-01-19 07:02:59 | 0:26:18 | 0:19:18 | 0:07:00 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129920 | 2023-01-19 06:15:31 | 2023-01-19 06:37:01 | 2023-01-19 07:29:10 | 0:52:09 | 0:37:42 | 0:14:27 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
fail | 7129921 | 2023-01-19 06:15:32 | 2023-01-19 06:40:12 | 2023-01-19 07:05:57 | 0:25:45 | 0:18:04 | 0:07:41 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129922 | 2023-01-19 06:15:33 | 2023-01-19 06:40:42 | 2023-01-19 06:55:39 | 0:14:57 | 0:09:39 | 0:05:18 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
fail | 7129923 | 2023-01-19 06:15:34 | 2023-01-19 06:41:13 | 2023-01-19 06:59:21 | 0:18:08 | 0:10:20 | 0:07:48 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi039 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 461708b6-97c6-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
fail | 7129924 | 2023-01-19 06:15:35 | 2023-01-19 06:41:53 | 2023-01-19 06:59:44 | 0:17:51 | 0:10:30 | 0:07:21 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi061 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5410487e-97c6-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi061:/dev/nvme4n1'
pass | 7129925 | 2023-01-19 06:15:36 | 2023-01-19 06:42:43 | 2023-01-19 07:23:47 | 0:41:04 | 0:32:53 | 0:08:11 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7129926 | 2023-01-19 06:15:37 | 2023-01-19 06:44:44 | 2023-01-19 07:30:41 | 0:45:57 | 0:37:51 | 0:08:06 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7129927 | 2023-01-19 06:15:38 | 2023-01-19 06:45:04 | 2023-01-19 07:09:31 | 0:24:27 | 0:18:37 | 0:05:50 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129928 | 2023-01-19 06:15:38 | 2023-01-19 06:45:25 | 2023-01-19 07:07:53 | 0:22:28 | 0:16:46 | 0:05:42 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
fail | 7129929 | 2023-01-19 06:15:39 | 2023-01-19 06:45:25 | 2023-01-19 07:06:32 | 0:21:07 | 0:13:33 | 0:07:34 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi131 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ae1ac72-97c7-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129930 | 2023-01-19 06:15:40 | 2023-01-19 06:45:35 | 2023-01-19 07:20:34 | 0:34:59 | 0:25:56 | 0:09:03 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
fail | 7129931 | 2023-01-19 06:15:41 | 2023-01-19 06:48:06 | 2023-01-19 07:17:27 | 0:29:21 | 0:17:54 | 0:11:27 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129932 | 2023-01-19 06:15:42 | 2023-01-19 06:52:27 | 2023-01-19 07:24:57 | 0:32:30 | 0:27:41 | 0:04:49 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
fail | 7129933 | 2023-01-19 06:15:43 | 2023-01-19 06:52:27 | 2023-01-19 07:23:32 | 0:31:05 | 0:24:17 | 0:06:48 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
fail | 7129934 | 2023-01-19 06:15:44 | 2023-01-19 06:52:27 | 2023-01-19 07:17:56 | 0:25:29 | 0:18:38 | 0:06:51 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129935 | 2023-01-19 06:15:44 | 2023-01-19 06:52:58 | 2023-01-19 07:11:12 | 0:18:14 | 0:10:34 | 0:07:40 | smithi | main | centos | 8.stream | orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi107 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e9e66fee-97c7-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi107:/dev/nvme4n1'
pass | 7129936 | 2023-01-19 06:15:45 | 2023-01-19 06:54:08 | 2023-01-19 07:40:40 | 0:46:32 | 0:38:23 | 0:08:09 | smithi | main | centos | 8.stream | orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
pass | 7129937 | 2023-01-19 06:15:46 | 2023-01-19 06:54:18 | 2023-01-19 07:38:49 | 0:44:31 | 0:39:48 | 0:04:43 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7129938 | 2023-01-19 06:15:47 | 2023-01-19 06:54:19 | 2023-01-19 07:37:55 | 0:43:36 | 0:29:59 | 0:13:37 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129939 | 2023-01-19 06:15:48 | 2023-01-19 06:55:29 | 2023-01-19 07:18:53 | 0:23:24 | 0:16:48 | 0:06:36 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli} | 1 | |
fail | 7129940 | 2023-01-19 06:15:49 | 2023-01-19 06:55:39 | 2023-01-19 07:17:30 | 0:21:51 | 0:13:19 | 0:08:32 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi149 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a1a6e532-97c8-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129941 | 2023-01-19 06:15:50 | 2023-01-19 07:22:53 | 952 | smithi | main | centos | 8.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools} 2-node-mgr agent/on orchestrator_cli} | 2 | ||||
pass | 7129942 | 2023-01-19 06:15:50 | 2023-01-19 06:59:30 | 2023-01-19 07:30:01 | 0:30:31 | 0:19:03 | 0:11:28 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7129943 | 2023-01-19 06:15:51 | 2023-01-19 06:59:51 | 2023-01-19 07:16:56 | 0:17:05 | 0:09:27 | 0:07:38 | smithi | main | centos | 8.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi100 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b52194d6-97c8-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129944 | 2023-01-19 06:15:52 | 2023-01-19 06:59:51 | 2023-01-19 07:41:19 | 0:41:28 | 0:30:31 | 0:10:57 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
fail | 7129945 | 2023-01-19 06:15:53 | 2023-01-19 07:01:21 | 2023-01-19 07:27:19 | 0:25:58 | 0:18:15 | 0:07:43 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7129946 | 2023-01-19 06:15:54 | 2023-01-19 07:02:32 | 2023-01-19 07:39:11 | 0:36:39 | 0:28:24 | 0:08:15 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7129947 | 2023-01-19 06:15:55 | 2023-01-19 07:36:41 | 1634 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | ||||
fail | 7129948 | 2023-01-19 06:15:56 | 2023-01-19 07:03:43 | 2023-01-19 07:30:44 | 0:27:01 | 0:18:12 | 0:08:49 | smithi | main | rhel | 8.6 | orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129949 | 2023-01-19 06:15:57 | 2023-01-19 07:06:03 | 2023-01-19 07:23:26 | 0:17:23 | 0:10:53 | 0:06:30 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi131 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9327c62e-97c9-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi131:/dev/nvme4n1'
fail | 7129950 | 2023-01-19 06:15:58 | 2023-01-19 07:06:34 | 2023-01-19 07:33:58 | 0:27:24 | 0:17:37 | 0:09:47 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129951 | 2023-01-19 06:15:59 | 2023-01-19 07:08:44 | 2023-01-19 07:49:34 | 0:40:50 | 0:34:18 | 0:06:32 | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
fail | 7129952 | 2023-01-19 06:16:00 | 2023-01-19 07:09:35 | 2023-01-19 07:48:12 | 0:38:37 | 0:31:21 | 0:07:16 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Cannot connect to remote host smithi107
pass | 7129953 | 2023-01-19 06:16:00 | 2023-01-19 07:11:15 | 2023-01-19 07:31:15 | 0:20:00 | 0:14:41 | 0:05:19 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} | 1 | |
fail | 7129954 | 2023-01-19 06:16:01 | 2023-01-19 07:11:15 | 2023-01-19 07:30:50 | 0:19:35 | 0:13:18 | 0:06:17 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi077 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c4dae98e-97ca-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
pass | 7129955 | 2023-01-19 06:16:02 | 2023-01-19 07:11:46 | 2023-01-19 07:45:31 | 0:33:45 | 0:26:58 | 0:06:47 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
pass | 7129956 | 2023-01-19 06:16:03 | 2023-01-19 07:12:16 | 2023-01-19 07:45:04 | 0:32:48 | 0:18:03 | 0:14:45 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
fail | 7129957 | 2023-01-19 06:16:04 | 2023-01-19 07:17:07 | 2023-01-19 07:43:16 | 0:26:09 | 0:18:17 | 0:07:52 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129958 | 2023-01-19 06:16:05 | 2023-01-19 07:17:37 | 2023-01-19 07:34:29 | 0:16:52 | 0:10:08 | 0:06:44 | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi138 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 26b1b52a-97cb-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force'
fail | 7129959 | 2023-01-19 06:16:06 | 2023-01-19 07:17:38 | 2023-01-19 07:48:16 | 0:30:38 | 0:25:35 | 0:05:03 | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason: SSH connection to smithi008 was lost: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c1d8e91a-97cb-11ed-9e55-001a4aab830c -- bash -c \'set -ex\nfor f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x\ndo\n echo "rotating key for $f"\n K=$(ceph auth get-key $f)\n NK="$K"\n ceph orch daemon rotate-key $f\n while [ "$K" == "$NK" ]; do\n sleep 5\n NK=$(ceph auth get-key $f)\n done\ndone\n\''
pass | 7129960 | 2023-01-19 06:16:07 | 2023-01-19 07:17:58 | 2023-01-19 07:32:53 | 0:14:55 | 0:08:56 | 0:05:59 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7129961 | 2023-01-19 06:16:07 | 2023-01-19 07:17:58 | 2023-01-19 07:44:46 | 0:26:48 | 0:18:33 | 0:08:15 | smithi | main | rhel | 8.6 | orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
fail | 7129962 | 2023-01-19 06:16:08 | 2023-01-19 07:20:39 | 2023-01-19 07:40:26 | 0:19:47 | 0:10:44 | 0:09:03 | smithi | main | rhel | 8.6 | orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8c2873fed10920a9f11c8fd09e87e77c92bfa3c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 04915f94-97cc-11ed-9e55-001a4aab830c -- ceph orch daemon add osd smithi039:/dev/nvme4n1' |
dead | 7129963 | 2023-01-19 06:16:09 | 2023-01-19 07:23:00 | 2023-01-19 07:50:27 | 0:27:27 | | | smithi | main | rhel | 8.6 | orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 |
fail | 7129964 | 2023-01-19 06:16:10 | 2023-01-19 07:23:30 | 2023-01-19 07:49:36 | 0:26:06 | 0:15:08 | 0:10:58 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Cannot connect to remote host smithi080 |
fail | 7129965 | 2023-01-19 06:16:11 | 2023-01-19 07:23:40 | 2023-01-19 07:43:04 | 0:19:24 | 0:12:52 | 0:06:32 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi085 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7500b978-97cc-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force' |
fail | 7129966 | 2023-01-19 06:16:12 | 2023-01-19 07:23:51 | 2023-01-19 07:42:46 | 0:18:55 | 0:07:26 | 0:11:29 | smithi | main | ubuntu | 20.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi123 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4b74ec5a-97cc-11ed-9e55-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4 --force' |
fail | 7129967 | 2023-01-19 06:16:13 | 2023-01-19 07:25:01 | 2023-01-19 07:49:06 | 0:24:05 | 0:11:40 | 0:12:25 | smithi | main | ubuntu | 20.04 | orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Cannot connect to remote host smithi017 |
fail | 7129968 | 2023-01-19 06:16:13 | 2023-01-19 07:27:22 | 2023-01-19 07:49:58 | 0:22:36 | 0:13:32 | 0:09:04 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} | 1 | |
Failure Reason:
Cannot connect to remote host smithi057 |
fail | 7129969 | 2023-01-19 06:16:14 | 2023-01-19 07:29:12 | 2023-01-19 07:49:43 | 0:20:31 | 0:11:47 | 0:08:44 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Cannot connect to remote host smithi023 |
fail | 7129970 | 2023-01-19 06:16:15 | 2023-01-19 07:30:02 | 2023-01-19 07:49:20 | 0:19:18 | 0:07:11 | 0:12:07 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
Cannot connect to remote host smithi117 |
fail | 7129971 | 2023-01-19 06:16:16 | 2023-01-19 07:30:43 | 2023-01-19 07:49:44 | 0:19:01 | 0:11:47 | 0:07:14 | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
Failure Reason:
Cannot connect to remote host smithi077 |
fail | 7129972 | 2023-01-19 06:16:17 | 2023-01-19 07:30:53 | 2023-01-19 07:50:03 | 0:19:10 | 0:11:49 | 0:07:21 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason:
Cannot connect to remote host smithi148 |
fail | 7129973 | 2023-01-19 06:16:18 | 2023-01-19 07:30:53 | 2023-01-19 07:50:08 | 0:19:15 | 0:11:21 | 0:07:54 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
Cannot connect to remote host smithi002 |
fail | 7129974 | 2023-01-19 06:16:18 | 2023-01-19 07:31:24 | 2023-01-19 08:02:21 | 0:30:57 | 0:19:44 | 0:11:13 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Cannot connect to remote host smithi110 |
fail | 7129975 | 2023-01-19 06:16:19 | 2023-01-19 07:32:24 | 2023-01-19 07:50:05 | 0:17:41 | 0:04:41 | 0:13:00 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason:
SSH connection to smithi078 was lost: 'sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true' |
dead | 7129976 | 2023-01-19 06:16:20 | 2023-01-19 07:34:05 | 2023-01-19 19:42:21 | 12:08:16 | | | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 |
Failure Reason:
hit max job timeout |
dead | 7129977 | 2023-01-19 06:16:21 | 2023-01-19 07:34:35 | 2023-01-19 07:51:59 | 0:17:24 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
dead | 7129978 | 2023-01-19 06:16:22 | 2023-01-19 07:36:46 | 2023-01-19 07:53:54 | 0:17:08 | | | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli} | 1 |
dead | 7129979 | 2023-01-19 06:16:23 | 2023-01-19 07:36:46 | 2023-01-19 08:11:26 | 0:34:40 | 0:26:47 | 0:07:53 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason:
{'smithi178.front.sepia.ceph.com': {'changed': False, 'msg': 'Failed to connect to the host via ssh: ssh: connect to host smithi178.front.sepia.ceph.com port 22: No route to host', 'unreachable': True}} |
dead | 7129980 | 2023-01-19 06:16:24 | 2023-01-19 07:37:56 | 2023-01-19 07:51:02 | 0:13:06 | | | smithi | main | rhel | 8.6 | orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 |
dead | 7129981 | 2023-01-19 06:16:24 | 2023-01-19 07:38:57 | 2023-01-19 07:50:25 | 0:11:28 | | | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
dead | 7129982 | 2023-01-19 06:16:25 | 2023-01-19 07:39:17 | 2023-01-19 08:01:40 | 0:22:23 | | | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli_mon} | 5 |
Failure Reason:
SSH connection to smithi006 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic-hwe-20.04' |