Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6451801 2021-10-20 09:33:57 2021-10-20 09:34:43 2021-10-20 10:20:38 0:45:55 0:32:10 0:13:45 smithi master centos 8.3 orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6451802 2021-10-20 09:33:58 2021-10-20 09:34:43 2021-10-20 21:48:51 12:14:08 smithi master centos 8.2 orch:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6451803 2021-10-20 09:33:59 2021-10-20 09:34:43 2021-10-20 09:59:13 0:24:30 0:13:24 0:11:06 smithi master ubuntu 20.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} 2
fail 6451804 2021-10-20 09:33:59 2021-10-20 09:34:43 2021-10-20 09:55:57 0:21:14 0:11:28 0:09:46 smithi master centos 8.2 orch:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi081 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3feb7906-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi081:/dev/nvme4n1'

fail 6451805 2021-10-20 09:34:00 2021-10-20 09:34:44 2021-10-20 10:03:43 0:28:59 0:18:50 0:10:09 smithi master centos 8.2 orch:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451806 2021-10-20 09:34:01 2021-10-20 09:34:44 2021-10-20 09:52:23 0:17:39 0:08:37 0:09:02 smithi master ubuntu 20.04 orch:cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi071 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aad95626-318a-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi071:vg_nvme/lv_4'

fail 6451807 2021-10-20 09:34:01 2021-10-20 09:34:44 2021-10-20 09:59:45 0:25:01 0:14:41 0:10:20 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi041 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8c1c6b1e-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi041:vg_nvme/lv_4'

pass 6451808 2021-10-20 09:34:02 2021-10-20 09:34:44 2021-10-20 10:19:43 0:44:59 0:31:10 0:13:49 smithi master centos 8.3 orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
fail 6451809 2021-10-20 09:34:03 2021-10-20 09:34:46 2021-10-20 10:01:23 0:26:37 0:14:25 0:12:12 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi133 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d4964ce8-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi133:vg_nvme/lv_4'

pass 6451810 2021-10-20 09:34:04 2021-10-20 09:34:46 2021-10-20 10:01:26 0:26:40 0:13:43 0:12:57 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_adoption} 1
fail 6451811 2021-10-20 09:34:04 2021-10-20 09:34:46 2021-10-20 10:05:42 0:30:56 0:16:07 0:14:49 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi101 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6f2887da-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi101:vg_nvme/lv_4'

fail 6451812 2021-10-20 09:34:05 2021-10-20 09:34:47 2021-10-20 10:11:21 0:36:34 0:25:40 0:10:54 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi049 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4a26f042-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi049:vg_nvme/lv_4'

fail 6451813 2021-10-20 09:34:06 2021-10-20 09:34:47 2021-10-20 10:07:43 0:32:56 0:20:26 0:12:30 smithi master centos 8.3 orch:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451814 2021-10-20 09:34:06 2021-10-20 09:34:48 2021-10-20 10:09:48 0:35:00 0:25:15 0:09:45 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi129 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4c89d782-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi129:vg_nvme/lv_4'

fail 6451815 2021-10-20 09:34:07 2021-10-20 09:34:49 2021-10-20 09:59:55 0:25:06 0:13:03 0:12:03 smithi master centos 8.3 orch:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a53ff002-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi029:/dev/nvme4n1'

fail 6451816 2021-10-20 09:34:08 2021-10-20 09:34:49 2021-10-20 10:01:07 0:26:18 0:12:39 0:13:39 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi013 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e3bda090-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi013:vg_nvme/lv_4'

pass 6451817 2021-10-20 09:34:09 2021-10-20 09:34:52 2021-10-20 10:01:23 0:26:31 0:17:42 0:08:49 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm} 1
fail 6451818 2021-10-20 09:34:09 2021-10-20 09:34:52 2021-10-20 10:06:02 0:31:10 0:22:10 0:09:00 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451819 2021-10-20 09:34:10 2021-10-20 09:34:52 2021-10-20 09:57:31 0:22:39 0:14:23 0:08:16 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi059 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8490a7f2-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi059:vg_nvme/lv_4'

fail 6451820 2021-10-20 09:34:11 2021-10-20 09:34:53 2021-10-20 10:06:10 0:31:17 0:16:41 0:14:36 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi106 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 768ca3bc-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi106:vg_nvme/lv_4'

fail 6451821 2021-10-20 09:34:12 2021-10-20 09:34:53 2021-10-20 10:06:13 0:31:20 0:15:57 0:15:23 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi073 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c03ef80-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi073:vg_nvme/lv_4'

fail 6451822 2021-10-20 09:34:12 2021-10-20 09:34:53 2021-10-20 10:07:43 0:32:50 0:23:53 0:08:57 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451823 2021-10-20 09:34:13 2021-10-20 09:34:55 2021-10-20 10:09:04 0:34:09 0:25:10 0:08:59 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi132 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 291d478e-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi132:vg_nvme/lv_4'

fail 6451824 2021-10-20 09:34:14 2021-10-20 09:34:55 2021-10-20 09:59:43 0:24:48 0:15:08 0:09:40 smithi master rhel 8.4 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi117 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cbf0ad86-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi117:/dev/nvme4n1'

fail 6451825 2021-10-20 09:34:15 2021-10-20 09:34:55 2021-10-20 10:11:23 0:36:28 0:26:23 0:10:05 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi037 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 58ca3424-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi037:vg_nvme/lv_4'

pass 6451826 2021-10-20 09:34:15 2021-10-20 09:34:56 2021-10-20 09:56:04 0:21:08 0:09:10 0:11:58 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm_repos} 1
fail 6451827 2021-10-20 09:34:16 2021-10-20 09:34:56 2021-10-20 10:12:28 0:37:32 0:24:31 0:13:01 smithi master ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451828 2021-10-20 09:34:17 2021-10-20 09:34:57 2021-10-20 10:03:50 0:28:53 0:14:59 0:13:54 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi110 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 03bc0206-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi110:vg_nvme/lv_4'

pass 6451829 2021-10-20 09:34:18 2021-10-20 09:34:58 2021-10-20 10:21:53 0:46:55 0:32:58 0:13:57 smithi master centos 8.3 orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6451830 2021-10-20 09:34:18 2021-10-20 09:34:59 2021-10-20 10:01:36 0:26:37 0:15:43 0:10:54 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi031 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f6ed6966-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi031:vg_nvme/lv_4'

pass 6451831 2021-10-20 09:34:19 2021-10-20 09:34:59 2021-10-20 10:24:06 0:49:07 0:33:12 0:15:55 smithi master ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} 2
fail 6451832 2021-10-20 09:34:20 2021-10-20 09:34:59 2021-10-20 10:04:24 0:29:25 0:16:09 0:13:16 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi094 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4223ffa8-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi094:vg_nvme/lv_4'

fail 6451833 2021-10-20 09:34:21 2021-10-20 09:35:00 2021-10-20 10:09:54 0:34:54 0:21:18 0:13:36 smithi master centos 8.2 orch:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451834 2021-10-20 09:34:22 2021-10-20 09:35:00 2021-10-20 10:01:43 0:26:43 0:15:37 0:11:06 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi145 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0dd3f398-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi145:vg_nvme/lv_4'

fail 6451835 2021-10-20 09:34:22 2021-10-20 09:35:00 2021-10-20 09:56:59 0:21:59 0:15:01 0:06:58 smithi master rhel 8.4 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 78544ad4-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi019:/dev/nvme4n1'

fail 6451836 2021-10-20 09:34:23 2021-10-20 09:35:01 2021-10-20 10:11:38 0:36:37 0:26:24 0:10:13 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi016 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5dc1f7dc-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi016:vg_nvme/lv_4'

fail 6451837 2021-10-20 09:34:24 2021-10-20 09:35:02 2021-10-20 10:00:50 0:25:48 0:13:27 0:12:21 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_nfs} 1
Failure Reason:

Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee61aece-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi036:vg_nvme/lv_4'

fail 6451838 2021-10-20 09:34:25 2021-10-20 09:35:02 2021-10-20 10:07:36 0:32:34 0:25:02 0:07:32 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f383887c-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi102:vg_nvme/lv_4'

fail 6451839 2021-10-20 09:34:25 2021-10-20 09:35:03 2021-10-20 10:08:12 0:33:09 0:19:37 0:13:32 smithi master centos 8.3 orch:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451840 2021-10-20 09:34:26 2021-10-20 09:35:04 2021-10-20 10:17:55 0:42:51 0:29:31 0:13:20 smithi master centos 8.2 orch:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Found coredumps on ubuntu@smithi105.front.sepia.ceph.com

fail 6451841 2021-10-20 09:34:27 2021-10-20 09:35:04 2021-10-20 10:02:54 0:27:50 0:12:22 0:15:28 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi115 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 13f13cd6-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi115:vg_nvme/lv_4'

fail 6451842 2021-10-20 09:34:28 2021-10-20 09:35:04 2021-10-20 10:04:27 0:29:23 0:16:00 0:13:23 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi104 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7b883c78-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi104:vg_nvme/lv_4'

fail 6451843 2021-10-20 09:34:28 2021-10-20 09:35:04 2021-10-20 10:05:40 0:30:36 0:17:19 0:13:17 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi058 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f51b514-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi058:vg_nvme/lv_4'

dead 6451844 2021-10-20 09:34:29 2021-10-20 09:35:05 2021-10-20 21:46:29 12:11:24 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

hit max job timeout

fail 6451845 2021-10-20 09:34:30 2021-10-20 09:35:05 2021-10-20 10:01:24 0:26:19 0:12:06 0:14:13 smithi master ubuntu 20.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi035 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aff32122-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi035:/dev/nvme4n1'

fail 6451846 2021-10-20 09:34:31 2021-10-20 09:35:05 2021-10-20 10:05:26 0:30:21 0:17:12 0:13:09 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi112 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7a9d2e86-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi112:vg_nvme/lv_4'

fail 6451847 2021-10-20 09:34:31 2021-10-20 09:35:06 2021-10-20 09:55:23 0:20:17 0:12:05 0:08:12 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi050 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5a4ea606-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi050:vg_nvme/lv_4'

fail 6451848 2021-10-20 09:34:32 2021-10-20 09:35:06 2021-10-20 10:10:05 0:34:59 0:25:12 0:09:47 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 51c506b8-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi150:vg_nvme/lv_4'

fail 6451849 2021-10-20 09:34:33 2021-10-20 09:35:06 2021-10-20 10:10:25 0:35:19 0:24:00 0:11:19 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451850 2021-10-20 09:34:34 2021-10-20 09:35:08 2021-10-20 10:09:49 0:34:41 0:25:57 0:08:44 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi060 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1087177c-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi060:vg_nvme/lv_4'

fail 6451851 2021-10-20 09:34:34 2021-10-20 09:35:08 2021-10-20 10:04:22 0:29:14 0:14:25 0:14:49 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi082 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1274344e-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi082:vg_nvme/lv_4'

fail 6451852 2021-10-20 09:34:35 2021-10-20 09:35:08 2021-10-20 10:12:36 0:37:28 0:23:45 0:13:43 smithi master ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 6451853 2021-10-20 09:34:36 2021-10-20 09:35:08 2021-10-20 10:22:04 0:46:56 0:33:12 0:13:44 smithi master centos 8.3 orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6451854 2021-10-20 09:34:37 2021-10-20 09:35:09 2021-10-20 09:56:46 0:21:37 0:11:32 0:10:05 smithi master centos 8.2 orch:cephadm/smoke/{0-nvme-loop distro/centos_8.2_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi045 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f53c5e6-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi045:/dev/nvme4n1'

fail 6451855 2021-10-20 09:34:37 2021-10-20 09:35:09 2021-10-20 09:58:07 0:22:58 0:10:52 0:12:06 smithi master centos 8.3 orch:cephadm/smoke-singlehost/{0-distro$/{centos_8.3_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi136 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b8302b0a-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi136:vg_nvme/lv_4'

fail 6451856 2021-10-20 09:34:38 2021-10-20 09:35:09 2021-10-20 10:04:08 0:28:59 0:14:55 0:14:04 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi192 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 699843b4-318c-11ec-8c28-001a4aab830c -- ceph mon dump -f json'

pass 6451857 2021-10-20 09:34:39 2021-10-20 09:35:10 2021-10-20 10:20:17 0:45:07 0:32:50 0:12:17 smithi master ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls mon_election/classic} 2
fail 6451858 2021-10-20 09:34:39 2021-10-20 09:35:10 2021-10-20 10:03:03 0:27:53 0:16:22 0:11:31 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi026 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 57297900-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi026:vg_nvme/lv_4'

pass 6451859 2021-10-20 09:34:40 2021-10-20 09:35:11 2021-10-20 09:56:14 0:21:03 0:11:50 0:09:13 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_adoption} 1
fail 6451860 2021-10-20 09:34:41 2021-10-20 09:35:11 2021-10-20 10:05:46 0:30:35 0:16:10 0:14:25 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi156 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 75cc3d8e-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi156:vg_nvme/lv_4'

fail 6451861 2021-10-20 09:34:42 2021-10-20 09:35:12 2021-10-20 10:06:20 0:31:08 0:19:29 0:11:39 smithi master centos 8.2 orch:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451862 2021-10-20 09:34:42 2021-10-20 09:35:12 2021-10-20 10:08:50 0:33:38 0:25:14 0:08:24 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi033 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1a2737a8-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi033:vg_nvme/lv_4'

fail 6451863 2021-10-20 09:34:43 2021-10-20 09:35:14 2021-10-20 10:11:22 0:36:08 0:26:30 0:09:38 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi052 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5abae080-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi052:vg_nvme/lv_4'

fail 6451864 2021-10-20 09:34:44 2021-10-20 09:35:14 2021-10-20 10:00:27 0:25:13 0:12:23 0:12:50 smithi master centos 8.3 orch:cephadm/smoke/{0-nvme-loop distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi079 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c19dd4b2-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi079:/dev/nvme4n1'

fail 6451865 2021-10-20 09:34:45 2021-10-20 09:35:14 2021-10-20 10:09:17 0:34:03 0:20:40 0:13:23 smithi master centos 8.3 orch:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

fail 6451866 2021-10-20 09:34:46 2021-10-20 09:35:14 2021-10-20 10:01:32 0:26:18 0:13:36 0:12:42 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi097 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee87cf5a-318b-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi097:vg_nvme/lv_4'

pass 6451867 2021-10-20 09:34:46 2021-10-20 09:35:15 2021-10-20 10:01:32 0:26:17 0:16:50 0:09:27 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_cephadm} 1
fail 6451868 2021-10-20 09:34:48 2021-10-20 09:35:16 2021-10-20 10:04:21 0:29:05 0:16:29 0:12:36 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason: Command failed on smithi067 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 384b385c-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi067:vg_nvme/lv_4'

fail 6451869 2021-10-20 09:34:50 2021-10-20 09:35:17 2021-10-20 10:01:53 0:26:36 0:15:10 0:11:26 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason: Command failed on smithi111 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1cbc1dcc-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi111:vg_nvme/lv_4'

fail 6451870 2021-10-20 09:34:51 2021-10-20 09:35:17 2021-10-20 10:04:41 0:29:24 0:16:00 0:13:24 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi078 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3ebe3586-318c-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi078:vg_nvme/lv_4'

fail 6451871 2021-10-20 09:34:52 2021-10-20 09:35:17 2021-10-20 10:06:48 0:31:31 0:22:20 0:09:11 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

dead 6451872 2021-10-20 09:34:55 2021-10-20 09:35:17 2021-10-20 21:47:23 12:12:06 smithi master centos 8.2 orch:cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason: hit max job timeout

fail 6451873 2021-10-20 09:34:56 2021-10-20 09:35:18 2021-10-20 10:12:17 0:36:59 0:24:36 0:12:23 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason: Command failed on smithi085 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 94608290-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi085:vg_nvme/lv_4'

fail 6451874 2021-10-20 09:34:57 2021-10-20 09:41:29 2021-10-20 10:11:53 0:30:24 0:13:17 0:17:07 smithi master rhel 8.4 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
Failure Reason: Command failed on smithi071 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a0dc6b2-318d-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi071:/dev/nvme4n1'

fail 6451875 2021-10-20 09:34:57 2021-10-20 09:52:40 2021-10-20 10:24:43 0:32:03 0:22:47 0:09:16 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi087 with status 1: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5e7cd546-318f-11ec-8c28-001a4aab830c -- ceph mon dump -f json'

pass 6451876 2021-10-20 09:34:58 2021-10-20 09:56:01 2021-10-20 10:10:13 0:14:12 0:07:26 0:06:46 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_cephadm_repos} 1
fail 6451877 2021-10-20 09:35:06 2021-10-20 09:56:02 2021-10-20 10:27:16 0:31:14 0:24:16 0:06:58 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

fail 6451878 2021-10-20 09:35:07 2021-10-20 09:56:22 2021-10-20 10:20:03 0:23:41 0:13:32 0:10:09 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason: Command failed on smithi045 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1bbf2fb6-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi045:vg_nvme/lv_4'

pass 6451879 2021-10-20 09:35:08 2021-10-20 09:56:52 2021-10-20 10:40:02 0:43:10 0:32:43 0:10:27 smithi master centos 8.3 orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.3_container_tools_3.0 conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6451880 2021-10-20 09:35:10 2021-10-20 09:57:03 2021-10-20 10:22:13 0:25:10 0:15:24 0:09:46 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason: Command failed on smithi059 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 77849f98-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi059:vg_nvme/lv_4'

pass 6451881 2021-10-20 09:35:11 2021-10-20 09:57:33 2021-10-20 10:40:15 0:42:42 0:31:12 0:11:30 smithi master centos 8.3 orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait 5-upgrade-ls mon_election/connectivity} 2
fail 6451882 2021-10-20 09:35:12 2021-10-20 09:59:45 2021-10-20 10:22:40 0:22:55 0:14:02 0:08:53 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi041 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d3eb1ffa-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi041:vg_nvme/lv_4'

fail 6451883 2021-10-20 09:35:15 2021-10-20 09:59:46 2021-10-20 10:35:50 0:36:04 0:25:17 0:10:47 smithi master ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

fail 6451884 2021-10-20 09:35:16 2021-10-20 09:59:57 2021-10-20 10:23:31 0:23:34 0:13:04 0:10:30 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason: Command failed on smithi079 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0952aa64-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi079:vg_nvme/lv_4'

fail 6451885 2021-10-20 09:35:18 2021-10-20 10:00:37 2021-10-20 10:22:12 0:21:35 0:14:58 0:06:37 smithi master rhel 8.4 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason: Command failed on smithi013 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d87f3470-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi013:/dev/nvme4n1'

fail 6451886 2021-10-20 09:35:19 2021-10-20 10:01:17 2021-10-20 10:33:10 0:31:53 0:23:54 0:07:59 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi023 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 637750a2-3190-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi023:vg_nvme/lv_4'

fail 6451887 2021-10-20 09:35:19 2021-10-20 10:01:28 2021-10-20 10:22:25 0:20:57 0:10:55 0:10:02 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/classic task/test_nfs} 1
Failure Reason: Command failed on smithi065 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ffba12f8-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi065:vg_nvme/lv_4'

fail 6451888 2021-10-20 09:35:20 2021-10-20 10:01:28 2021-10-20 10:30:36 0:29:08 0:18:30 0:10:38 smithi master centos 8.2 orch:cephadm/smoke-roleless/{0-distro/centos_8.2_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

fail 6451889 2021-10-20 09:35:21 2021-10-20 10:01:28 2021-10-20 10:32:14 0:30:46 0:24:15 0:06:31 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason: Command failed on smithi036 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 47657de4-3190-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi036:vg_nvme/lv_4'

fail 6451890 2021-10-20 09:35:22 2021-10-20 10:01:28 2021-10-20 10:23:22 0:21:54 0:11:08 0:10:46 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi031 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e9061c8c-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi031:vg_nvme/lv_4'

fail 6451891 2021-10-20 09:35:22 2021-10-20 10:01:39 2021-10-20 10:30:31 0:28:52 0:18:35 0:10:17 smithi master centos 8.3 orch:cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

fail 6451892 2021-10-20 09:35:23 2021-10-20 10:01:39 2021-10-20 10:25:00 0:23:21 0:13:16 0:10:05 smithi master centos 8.2 orch:cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason: Command failed on smithi145 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3e54dc78-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi145:vg_nvme/lv_4'

fail 6451893 2021-10-20 09:35:24 2021-10-20 10:01:49 2021-10-20 10:25:10 0:23:21 0:12:31 0:10:50 smithi master centos 8.2 orch:cephadm/with-work/{0-distro/centos_8.2_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason: Command failed on smithi111 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3db922ba-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi111:vg_nvme/lv_4'

fail 6451894 2021-10-20 09:35:25 2021-10-20 10:02:00 2021-10-20 10:24:28 0:22:28 0:10:24 0:12:04 smithi master ubuntu 20.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason: Command failed on smithi115 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c1c1de9a-318e-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi115:/dev/nvme4n1'

fail 6451895 2021-10-20 09:35:25 2021-10-20 10:03:00 2021-10-20 10:25:29 0:22:29 0:13:19 0:09:10 smithi master centos 8.3 orch:cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi026 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 535f50d0-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi026:vg_nvme/lv_4'

fail 6451896 2021-10-20 09:35:26 2021-10-20 10:03:10 2021-10-20 10:22:05 0:18:55 0:10:48 0:08:07 smithi master centos 8.2 orch:cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} 1
Failure Reason: Command failed on smithi195 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 34c80f68-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi195:vg_nvme/lv_4'

fail 6451897 2021-10-20 09:35:27 2021-10-20 10:03:11 2021-10-20 10:34:30 0:31:19 0:23:47 0:07:32 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason: Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e5829a4-3190-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi018:vg_nvme/lv_4'

fail 6451898 2021-10-20 09:35:27 2021-10-20 10:03:51 2021-10-20 10:30:47 0:26:56 0:20:35 0:06:21 smithi master rhel 8.4 orch:cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason: reached maximum tries (120) after waiting for 120 seconds

fail 6451899 2021-10-20 09:35:28 2021-10-20 10:04:01 2021-10-20 10:34:52 0:30:51 0:23:34 0:07:17 smithi master rhel 8.4 orch:cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi084 with status 22: 'sudo cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e82e748-3190-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'

fail 6451900 2021-10-20 09:35:29 2021-10-20 10:04:12 2021-10-20 10:26:04 0:21:52 0:10:49 0:11:03 smithi master ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason: Command failed on smithi104 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c2bb1d4b9d4d3d2fda9fac76bf742e446c7ca42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 477fc2cc-318f-11ec-8c28-001a4aab830c -- ceph orch daemon add osd smithi104:vg_nvme/lv_4'