Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6335802 2021-08-12 15:37:56 2021-08-12 16:20:30 2021-08-12 16:53:34 0:33:04 0:21:36 0:11:28 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335803 2021-08-12 15:37:57 2021-08-12 16:20:30 2021-08-12 16:56:37 0:36:07 0:22:02 0:14:05 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-1055bc9a-fb8c-11eb-8c24-001a4aab830c@mon.b'

fail 6335804 2021-08-12 15:37:58 2021-08-12 16:20:50 2021-08-12 16:41:37 0:20:47 0:09:48 0:10:59 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
dead 6335805 2021-08-12 15:37:59 2021-08-12 16:22:21 2021-08-13 04:31:41 12:09:20 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335806 2021-08-12 15:38:00 2021-08-12 16:22:41 2021-08-12 16:59:49 0:37:08 0:23:12 0:13:56 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-a255544e-fb8b-11eb-8c24-001a4aab830c@mon.b'

fail 6335807 2021-08-12 15:38:01 2021-08-12 16:23:41 2021-08-12 16:55:31 0:31:50 0:25:24 0:06:26 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335808 2021-08-12 15:38:03 2021-08-12 16:24:12 2021-08-12 16:53:06 0:28:54 0:23:03 0:05:51 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi083 with status 5: 'sudo systemctl stop ceph-88645ee0-fb8b-11eb-8c24-001a4aab830c@mon.b'

fail 6335809 2021-08-12 15:38:04 2021-08-12 16:24:12 2021-08-12 16:40:10 0:15:58 0:09:26 0:06:32 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi146 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 98905508-fb8b-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335810 2021-08-12 15:38:05 2021-08-12 16:41:33 2021-08-12 17:15:25 0:33:52 0:22:02 0:11:50 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335811 2021-08-12 15:38:07 2021-08-12 16:41:54 2021-08-12 16:58:48 0:16:54 0:10:25 0:06:29 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed on smithi114 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f7e0a894-fb8d-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335812 2021-08-12 15:38:08 2021-08-12 16:41:55 2021-08-12 16:58:11 0:16:16 0:06:07 0:10:09 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi155 with status 1: 'sudo kubeadm init --node-name smithi155 --token abcdef.p1lajateofu62q44 --pod-network-cidr 10.252.208.0/21'

fail 6335813 2021-08-12 15:38:09 2021-08-12 16:42:35 2021-08-12 17:15:27 0:32:52 0:20:11 0:12:41 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335814 2021-08-12 15:38:11 2021-08-12 16:42:35 2021-08-12 17:01:29 0:18:54 0:10:01 0:08:53 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
fail 6335815 2021-08-12 15:38:12 2021-08-12 16:42:35 2021-08-12 17:20:13 0:37:38 0:23:17 0:14:21 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi167 with status 5: 'sudo systemctl stop ceph-47fb64d0-fb8f-11eb-8c24-001a4aab830c@mon.b'

fail 6335816 2021-08-12 15:38:13 2021-08-12 16:42:56 2021-08-12 17:15:31 0:32:35 0:20:16 0:12:19 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-b38f4a8c-fb8e-11eb-8c24-001a4aab830c@mon.b'

fail 6335817 2021-08-12 15:38:14 2021-08-12 16:42:56 2021-08-12 17:02:14 0:19:18 0:08:26 0:10:52 smithi master centos 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_kubic_stable} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 64aff9f2-fb8e-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335818 2021-08-12 15:38:15 2021-08-12 16:43:16 2021-08-12 17:12:21 0:29:05 0:21:23 0:07:42 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi117 with status 5: 'sudo systemctl stop ceph-34db479a-fb8e-11eb-8c24-001a4aab830c@mon.b'

dead 6335819 2021-08-12 15:38:16 2021-08-12 16:43:27 2021-08-13 04:53:21 12:09:54 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6335820 2021-08-12 15:38:17 2021-08-12 16:43:37 2021-08-12 17:17:39 0:34:02 0:26:15 0:07:47 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335821 2021-08-12 15:38:18 2021-08-12 16:43:57 2021-08-12 17:02:58 0:19:01 0:10:11 0:08:50 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
fail 6335822 2021-08-12 15:38:19 2021-08-12 16:44:07 2021-08-12 17:20:46 0:36:39 0:21:38 0:15:01 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi111 with status 5: 'sudo systemctl stop ceph-8e323a6e-fb8f-11eb-8c24-001a4aab830c@mon.b'

fail 6335823 2021-08-12 15:38:21 2021-08-12 16:45:28 2021-08-12 17:19:30 0:34:02 0:22:02 0:12:00 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335824 2021-08-12 15:38:22 2021-08-12 16:46:08 2021-08-12 17:09:51 0:23:43 0:15:53 0:07:50 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6335825 2021-08-12 15:38:23 2021-08-12 16:46:08 2021-08-12 17:24:00 0:37:52 0:21:08 0:16:44 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6335826 2021-08-12 15:38:24 2021-08-12 16:51:29 2021-08-13 05:02:04 12:10:35 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335827 2021-08-12 15:38:25 2021-08-12 16:53:10 2021-08-12 17:24:39 0:31:29 0:25:12 0:06:17 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi152 with status 5: 'sudo systemctl stop ceph-17d9c070-fb90-11eb-8c24-001a4aab830c@mon.b'

fail 6335828 2021-08-12 15:38:26 2021-08-12 16:53:20 2021-08-12 17:22:50 0:29:30 0:21:35 0:07:55 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi175 with status 5: 'sudo systemctl stop ceph-a94a7f00-fb8f-11eb-8c24-001a4aab830c@mon.b'

fail 6335829 2021-08-12 15:38:27 2021-08-12 16:53:40 2021-08-12 17:26:38 0:32:58 0:25:45 0:07:13 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335830 2021-08-12 15:38:28 2021-08-12 16:53:41 2021-08-12 17:12:41 0:19:00 0:10:07 0:08:53 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
fail 6335831 2021-08-12 15:38:29 2021-08-12 16:53:41 2021-08-12 17:35:02 0:41:21 0:33:51 0:07:30 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-53f04466-fb91-11eb-8c24-001a4aab830c@mon.b'

dead 6335832 2021-08-12 15:38:30 2021-08-12 16:53:51 2021-08-13 05:04:08 12:10:17 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6335833 2021-08-12 15:38:31 2021-08-12 16:54:42 2021-08-12 17:29:29 0:34:47 0:22:33 0:12:14 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335834 2021-08-12 15:38:33 2021-08-12 16:55:32 2021-08-12 17:26:44 0:31:12 0:20:05 0:11:07 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335835 2021-08-12 15:38:34 2021-08-12 16:55:32 2021-08-12 17:27:06 0:31:34 0:20:12 0:11:22 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi089 with status 5: 'sudo systemctl stop ceph-eef7297c-fb8f-11eb-8c24-001a4aab830c@mon.b'

fail 6335836 2021-08-12 15:38:35 2021-08-12 16:55:43 2021-08-12 17:25:32 0:29:49 0:21:22 0:08:27 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi204 with status 5: 'sudo systemctl stop ceph-0b7cb80a-fb90-11eb-8c24-001a4aab830c@mon.b'

fail 6335837 2021-08-12 15:38:36 2021-08-12 16:56:13 2021-08-12 17:15:26 0:19:13 0:09:53 0:09:20 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
fail 6335838 2021-08-12 15:38:37 2021-08-12 16:56:13 2021-08-12 17:32:41 0:36:28 0:21:50 0:14:38 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-26521160-fb91-11eb-8c24-001a4aab830c@mon.b'

fail 6335839 2021-08-12 15:38:38 2021-08-12 16:56:44 2021-08-12 17:11:50 0:15:06 0:09:20 0:05:46 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi197 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 00cf41fc-fb90-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335840 2021-08-12 15:38:39 2021-08-12 16:57:04 2021-08-12 17:30:22 0:33:18 0:25:35 0:07:43 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335841 2021-08-12 15:38:41 2021-08-12 16:57:04 2021-08-12 17:30:31 0:33:27 0:21:50 0:11:37 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335842 2021-08-12 15:38:42 2021-08-12 16:57:05 2021-08-12 17:37:54 0:40:49 0:32:55 0:07:54 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-f566e692-fb91-11eb-8c24-001a4aab830c@mon.b'

fail 6335843 2021-08-12 15:38:43 2021-08-12 16:58:25 2021-08-12 17:13:56 0:15:31 0:06:13 0:09:18 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi155 with status 1: 'sudo kubeadm init --node-name smithi155 --token abcdef.ofxgw1jenjubqtus --pod-network-cidr 10.252.208.0/21'

dead 6335844 2021-08-12 15:38:44 2021-08-12 16:58:25 2021-08-13 05:07:36 12:09:11 smithi master ubuntu 20.04 rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

dead 6335845 2021-08-12 15:38:45 2021-08-12 16:58:55 2021-08-13 05:07:08 12:08:13 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335846 2021-08-12 15:38:46 2021-08-12 16:58:56 2021-08-12 17:22:44 0:23:48 0:10:58 0:12:50 smithi master centos 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_kubic_stable} 2-node-mgr orchestrator_cli} 2
Failure Reason:

Test failure: test_device_ls (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

fail 6335847 2021-08-12 15:38:47 2021-08-12 16:59:36 2021-08-12 17:31:30 0:31:54 0:19:47 0:12:07 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-1294918e-fb91-11eb-8c24-001a4aab830c@mon.b'

fail 6335848 2021-08-12 15:38:48 2021-08-12 16:59:57 2021-08-12 17:20:21 0:20:24 0:09:50 0:10:34 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
fail 6335849 2021-08-12 15:38:49 2021-08-12 17:01:07 2021-08-12 17:32:47 0:31:40 0:19:54 0:11:46 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335850 2021-08-12 15:38:50 2021-08-12 17:01:37 2021-08-12 17:20:57 0:19:20 0:13:22 0:05:58 smithi master rhel 8.3 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.3_kubic_stable} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 33fa783e-fb91-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335851 2021-08-12 15:38:51 2021-08-12 17:02:18 2021-08-12 17:32:16 0:29:58 0:21:46 0:08:12 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi136 with status 5: 'sudo systemctl stop ceph-ffeb6508-fb90-11eb-8c24-001a4aab830c@mon.b'

dead 6335852 2021-08-12 15:38:52 2021-08-12 17:03:18 2021-08-13 05:13:11 12:09:53 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6335853 2021-08-12 15:38:53 2021-08-12 17:03:59 2021-08-12 17:38:25 0:34:26 0:24:52 0:09:34 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335854 2021-08-12 15:38:54 2021-08-12 17:06:50 2021-08-12 17:39:53 0:33:03 0:23:06 0:09:57 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335855 2021-08-12 15:38:57 2021-08-12 17:06:50 2021-08-12 17:27:30 0:20:40 0:09:48 0:10:52 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
fail 6335856 2021-08-12 15:38:57 2021-08-12 17:08:11 2021-08-12 17:29:04 0:20:53 0:14:47 0:06:06 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6335857 2021-08-12 15:38:59 2021-08-12 17:08:11 2021-08-12 17:46:15 0:38:04 0:21:48 0:16:16 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi090 with status 5: 'sudo systemctl stop ceph-ef0796e2-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335858 2021-08-12 15:39:00 2021-08-12 17:10:32 2021-08-12 17:41:40 0:31:08 0:24:59 0:06:09 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi165 with status 5: 'sudo systemctl stop ceph-8062d6de-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335859 2021-08-12 15:39:01 2021-08-12 17:10:42 2021-08-12 17:40:22 0:29:40 0:21:33 0:08:07 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi161 with status 5: 'sudo systemctl stop ceph-1ca98a48-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335860 2021-08-12 15:39:02 2021-08-12 17:11:13 2021-08-12 17:42:39 0:31:26 0:20:09 0:11:17 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335861 2021-08-12 15:39:03 2021-08-12 17:11:53 2021-08-12 17:47:29 0:35:36 0:23:58 0:11:38 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi105 with status 5: 'sudo systemctl stop ceph-7450379c-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335862 2021-08-12 15:39:04 2021-08-12 17:12:03 2021-08-12 17:45:19 0:33:16 0:25:57 0:07:19 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6335863 2021-08-12 15:39:05 2021-08-12 17:12:24 2021-08-13 05:21:57 12:09:33 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6335864 2021-08-12 15:39:06 2021-08-12 17:12:54 2021-08-12 17:31:54 0:19:00 0:09:51 0:09:09 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
fail 6335865 2021-08-12 15:39:07 2021-08-12 17:12:54 2021-08-12 17:47:36 0:34:42 0:22:52 0:11:50 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

dead 6335866 2021-08-12 15:39:08 2021-08-12 17:13:34 2021-08-13 05:22:20 12:08:46 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335867 2021-08-12 15:39:09 2021-08-12 17:13:35 2021-08-12 17:46:24 0:32:49 0:21:11 0:11:38 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi043 with status 5: 'sudo systemctl stop ceph-94b604b2-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335868 2021-08-12 15:39:10 2021-08-12 17:15:35 2021-08-12 17:44:18 0:28:43 0:21:45 0:06:58 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-aa59faf8-fb92-11eb-8c24-001a4aab830c@mon.b'

fail 6335869 2021-08-12 15:39:11 2021-08-12 17:15:36 2021-08-12 17:46:45 0:31:09 0:20:18 0:10:51 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335870 2021-08-12 15:39:12 2021-08-12 17:15:36 2021-08-12 17:32:05 0:16:29 0:09:39 0:06:50 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi058 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 99939292-fb92-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335871 2021-08-12 15:39:13 2021-08-12 17:15:36 2021-08-12 17:34:46 0:19:10 0:09:39 0:09:31 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
fail 6335872 2021-08-12 15:39:14 2021-08-12 17:15:36 2021-08-12 17:48:47 0:33:11 0:25:54 0:07:17 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335873 2021-08-12 15:39:15 2021-08-12 17:15:37 2021-08-12 17:33:10 0:17:33 0:10:45 0:06:48 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ce100d48-fb92-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335874 2021-08-12 15:39:17 2021-08-12 17:16:37 2021-08-12 17:37:53 0:21:16 0:06:38 0:14:38 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi103 with status 1: 'sudo kubeadm init --node-name smithi103 --token abcdef.p9zk528sum13w7t2 --pod-network-cidr 10.251.48.0/21'

fail 6335875 2021-08-12 15:39:18 2021-08-12 17:19:58 2021-08-12 17:54:55 0:34:57 0:23:01 0:11:56 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-2376eca6-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335876 2021-08-12 15:39:19 2021-08-12 17:19:58 2021-08-12 17:51:44 0:31:46 0:20:32 0:11:14 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335877 2021-08-12 15:39:20 2021-08-12 17:20:18 2021-08-12 17:55:56 0:35:38 0:22:35 0:13:03 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi164 with status 5: 'sudo systemctl stop ceph-56ccd3ae-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335878 2021-08-12 15:39:21 2021-08-12 17:20:48 2021-08-12 17:52:09 0:31:21 0:19:15 0:12:06 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi120 with status 5: 'sudo systemctl stop ceph-f08fdd34-fb93-11eb-8c24-001a4aab830c@mon.b'

fail 6335879 2021-08-12 15:39:22 2021-08-12 17:20:49 2021-08-12 17:36:03 0:15:14 0:06:44 0:08:30 smithi master ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi104 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3173ce9c-fb93-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335880 2021-08-12 15:39:23 2021-08-12 17:20:49 2021-08-12 17:49:46 0:28:57 0:21:59 0:06:58 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi193 with status 5: 'sudo systemctl stop ceph-6b7e44f0-fb93-11eb-8c24-001a4aab830c@mon.b'

dead 6335881 2021-08-12 15:39:24 2021-08-12 17:20:59 2021-08-13 05:30:28 12:09:29 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6335882 2021-08-12 15:39:25 2021-08-12 17:21:30 2021-08-12 17:41:46 0:20:16 0:09:45 0:10:31 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
fail 6335883 2021-08-12 15:39:26 2021-08-12 17:22:32 2021-08-12 17:53:39 0:31:07 0:19:59 0:11:08 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335884 2021-08-12 15:39:27 2021-08-12 17:22:53 2021-08-12 17:53:49 0:30:56 0:25:24 0:05:32 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335885 2021-08-12 15:39:28 2021-08-12 17:22:53 2021-08-12 17:43:47 0:20:54 0:14:51 0:06:03 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

dead 6335886 2021-08-12 15:39:29 2021-08-12 17:22:53 2021-08-13 05:32:59 12:10:06 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335887 2021-08-12 15:39:30 2021-08-12 17:23:54 2021-08-12 17:56:26 0:32:32 0:26:14 0:06:18 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi037 with status 5: 'sudo systemctl stop ceph-5bb076a0-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335888 2021-08-12 15:39:31 2021-08-12 17:24:04 2021-08-12 17:58:17 0:34:13 0:22:08 0:12:05 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/basic 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335889 2021-08-12 15:39:32 2021-08-12 17:24:44 2021-08-12 17:44:16 0:19:32 0:09:57 0:09:35 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
fail 6335890 2021-08-12 15:39:33 2021-08-12 17:24:45 2021-08-12 17:52:44 0:27:59 0:21:08 0:06:51 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi204 with status 5: 'sudo systemctl stop ceph-23476116-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335891 2021-08-12 15:39:34 2021-08-12 17:25:35 2021-08-12 17:57:45 0:32:10 0:20:23 0:11:47 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335892 2021-08-12 15:39:35 2021-08-12 17:26:45 2021-08-12 18:02:01 0:35:16 0:22:21 0:12:55 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi022 with status 5: 'sudo systemctl stop ceph-527bf996-fb95-11eb-8c24-001a4aab830c@mon.b'

fail 6335893 2021-08-12 15:39:36 2021-08-12 17:27:16 2021-08-12 18:02:33 0:35:17 0:23:23 0:11:54 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-3f38b81a-fb95-11eb-8c24-001a4aab830c@mon.b'

dead 6335894 2021-08-12 15:39:37 2021-08-12 17:27:36 2021-08-13 05:38:39 12:11:03 smithi master centos 8.3 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
Failure Reason:

hit max job timeout

fail 6335895 2021-08-12 15:39:38 2021-08-12 17:29:37 2021-08-12 18:03:22 0:33:45 0:25:33 0:08:12 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/iscsi 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335896 2021-08-12 15:39:39 2021-08-12 17:30:27 2021-08-12 18:01:52 0:31:25 0:21:27 0:09:58 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi109 with status 5: 'sudo systemctl stop ceph-be218ad6-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335897 2021-08-12 15:39:40 2021-08-12 17:30:37 2021-08-12 17:49:40 0:19:03 0:09:49 0:09:14 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
fail 6335898 2021-08-12 15:39:41 2021-08-12 17:30:37 2021-08-12 18:01:02 0:30:25 0:21:20 0:09:05 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi163 with status 5: 'sudo systemctl stop ceph-fe4d0586-fb94-11eb-8c24-001a4aab830c@mon.b'

fail 6335899 2021-08-12 15:39:42 2021-08-12 17:31:38 2021-08-12 18:05:00 0:33:22 0:21:52 0:11:30 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/mirror 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335900 2021-08-12 15:39:43 2021-08-12 17:32:08 2021-08-12 17:49:03 0:16:55 0:09:29 0:07:26 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi136 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid efbe388c-fb94-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335901 2021-08-12 15:39:44 2021-08-12 17:32:18 2021-08-12 18:03:44 0:31:26 0:19:53 0:11:33 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress-rgw 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335902 2021-08-12 15:39:45 2021-08-12 17:32:49 2021-08-12 18:06:18 0:33:29 0:25:48 0:07:41 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335903 2021-08-12 15:39:46 2021-08-12 17:32:50 2021-08-12 17:50:12 0:17:22 0:06:35 0:10:47 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo kubeadm init --node-name smithi016 --token abcdef.i9a7msf1jn50kvy9 --pod-network-cidr 10.248.120.0/21'

dead 6335904 2021-08-12 15:39:47 2021-08-12 17:33:20 2021-08-13 05:43:50 12:10:30 smithi master centos 8.3 rados/upgrade/parallel/{0-distro$/{centos_8.3_kubic_stable} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 6335905 2021-08-12 15:39:48 2021-08-12 17:34:51 2021-08-12 18:14:12 0:39:21 0:32:32 0:06:49 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-08b0ed06-fb97-11eb-8c24-001a4aab830c@mon.b'

fail 6335906 2021-08-12 15:39:49 2021-08-12 17:35:11 2021-08-12 22:31:15 4:56:04 4:41:16 0:14:48 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi131 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

dead 6335907 2021-08-12 15:39:50 2021-08-12 18:14:37 2021-08-13 06:24:41 12:10:04 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6335908 2021-08-12 15:39:51 2021-08-12 18:20:18 2021-08-12 18:39:09 0:18:51 0:09:48 0:09:03 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
fail 6335909 2021-08-12 15:39:52 2021-08-12 18:20:19 2021-08-12 18:49:19 0:29:00 0:22:28 0:06:32 smithi master rhel 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.3_kubic_stable} 2-node-mgr orchestrator_cli} 2
Failure Reason:

Test failure: test_device_ls (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

dead 6335910 2021-08-12 15:39:54 2021-08-12 18:20:29 2021-08-13 06:30:33 12:10:04 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

hit max job timeout

fail 6335911 2021-08-12 15:39:56 2021-08-12 18:21:19 2021-08-12 18:52:55 0:31:36 0:19:17 0:12:19 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-6974b60e-fb9c-11eb-8c24-001a4aab830c@mon.b'

fail 6335912 2021-08-12 15:39:58 2021-08-12 18:22:00 2021-08-12 18:57:18 0:35:18 0:22:04 0:13:14 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi071 with status 5: 'sudo systemctl stop ceph-d1a557c4-fb9c-11eb-8c24-001a4aab830c@mon.b'

fail 6335913 2021-08-12 15:39:59 2021-08-12 18:22:20 2021-08-12 18:40:06 0:17:46 0:06:49 0:10:57 smithi master ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi204 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e8555768-fb9b-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 6335914 2021-08-12 15:40:00 2021-08-12 18:22:30 2021-08-12 18:52:15 0:29:45 0:21:25 0:08:20 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi037 with status 5: 'sudo systemctl stop ceph-2d33b03c-fb9c-11eb-8c24-001a4aab830c@mon.b'

dead 6335915 2021-08-12 15:40:01 2021-08-12 18:23:41 2021-08-13 06:34:22 12:10:41 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} 2
Failure Reason:

hit max job timeout

fail 6335916 2021-08-12 15:40:02 2021-08-12 18:25:11 2021-08-12 18:57:07 0:31:56 0:21:30 0:10:26 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335917 2021-08-12 15:40:03 2021-08-12 18:25:42 2021-08-12 18:45:19 0:19:37 0:10:01 0:09:36 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
fail 6335918 2021-08-12 15:40:04 2021-08-12 18:25:42 2021-08-12 18:56:56 0:31:14 0:20:31 0:10:43 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335919 2021-08-12 15:40:05 2021-08-12 18:26:02 2021-08-12 18:48:21 0:22:19 0:14:58 0:07:21 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6335920 2021-08-12 15:40:06 2021-08-12 18:27:13 2021-08-12 18:59:58 0:32:45 0:25:19 0:07:26 smithi master rhel 8.3 rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs2 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335921 2021-08-12 15:40:07 2021-08-12 18:28:53 2021-08-12 19:00:06 0:31:13 0:25:01 0:06:12 smithi master rhel 8.3 rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi109 with status 5: 'sudo systemctl stop ceph-7cebc0c8-fb9d-11eb-8c24-001a4aab830c@mon.b'

fail 6335922 2021-08-12 15:40:08 2021-08-12 18:29:04 2021-08-12 18:59:38 0:30:34 0:21:13 0:09:21 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi089 with status 5: 'sudo systemctl stop ceph-3459195a-fb9d-11eb-8c24-001a4aab830c@mon.b'

fail 6335923 2021-08-12 15:40:09 2021-08-12 18:30:44 2021-08-12 19:06:04 0:35:20 0:22:24 0:12:56 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6335924 2021-08-12 15:40:10 2021-08-12 18:32:35 2021-08-12 19:08:39 0:36:04 0:24:00 0:12:04 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi049 with status 5: 'sudo systemctl stop ceph-cbccda10-fb9d-11eb-8c24-001a4aab830c@mon.b'

fail 6335925 2021-08-12 15:40:12 2021-08-12 18:33:15 2021-08-12 18:52:44 0:19:29 0:09:50 0:09:39 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
dead 6335926 2021-08-12 15:40:13 2021-08-12 18:33:15 2021-08-13 06:42:42 12:09:27 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} 2
Failure Reason:

hit max job timeout