User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2021-08-11 14:15:52 | 2021-08-11 14:19:29 | 2021-08-12 03:50:02 | 13:30:33 | rados | wip-yuri3-testing-2021-08-09-1006 | smithi | 7546c41 | 12 | 107 | 18 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6334575 | 2021-08-11 14:17:24 | 2021-08-11 14:19:29 | 2021-08-11 14:50:13 | 0:30:44 | 0:22:06 | 0:08:38 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334577 | 2021-08-11 14:17:25 | 2021-08-11 14:19:31 | 2021-08-11 14:52:50 | 0:33:19 | 0:22:20 | 0:10:59 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi172 with status 5: 'sudo systemctl stop ceph-9c9e5206-fab1-11eb-8c24-001a4aab830c@mon.b'
fail | 6334579 | 2021-08-11 14:17:26 | 2021-08-11 14:19:31 | 2021-08-11 14:38:51 | 0:19:20 | 0:10:46 | 0:08:34 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
dead | 6334581 | 2021-08-11 14:17:27 | 2021-08-11 14:19:31 | 2021-08-12 02:30:19 | 12:10:48 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
fail | 6334583 | 2021-08-11 14:17:28 | 2021-08-11 14:21:32 | 2021-08-11 14:53:55 | 0:32:23 | 0:21:25 | 0:10:58 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi151 with status 5: 'sudo systemctl stop ceph-7bb7838c-fab1-11eb-8c24-001a4aab830c@mon.b'
fail | 6334585 | 2021-08-11 14:17:29 | 2021-08-11 14:22:12 | 2021-08-11 14:56:07 | 0:33:55 | 0:25:59 | 0:07:56 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
pass | 6334587 | 2021-08-11 14:17:30 | 2021-08-11 14:23:03 | 2021-08-11 15:52:04 | 1:29:01 | 1:17:23 | 0:11:38 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/valgrind} | 2 | |
fail | 6334589 | 2021-08-11 14:17:31 | 2021-08-11 14:24:13 | 2021-08-11 14:59:56 | 0:35:43 | 0:23:10 | 0:12:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi160 with status 5: 'sudo systemctl stop ceph-b416beea-fab2-11eb-8c24-001a4aab830c@mon.b'
fail | 6334591 | 2021-08-11 14:17:32 | 2021-08-11 14:27:02 | 2021-08-11 14:47:58 | 0:20:56 | 0:11:24 | 0:09:32 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi074 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91e5b402-fab2-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334593 | 2021-08-11 14:17:33 | 2021-08-11 14:27:04 | 2021-08-11 15:01:19 | 0:34:15 | 0:22:04 | 0:12:11 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334595 | 2021-08-11 14:17:34 | 2021-08-11 14:27:05 | 2021-08-11 14:50:32 | 0:23:27 | 0:12:32 | 0:10:55 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cec48e3e-fab2-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334597 | 2021-08-11 14:17:35 | 2021-08-11 14:27:05 | 2021-08-11 14:44:04 | 0:16:59 | 0:07:05 | 0:09:54 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: Command failed on smithi035 with status 1: 'sudo kubeadm init --node-name smithi035 --token abcdef.9cnh57057yt8ks0x --pod-network-cidr 10.249.16.0/21'
fail | 6334598 | 2021-08-11 14:17:36 | 2021-08-11 14:27:05 | 2021-08-11 14:58:34 | 0:31:29 | 0:21:03 | 0:10:26 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334600 | 2021-08-11 14:17:37 | 2021-08-11 14:27:15 | 2021-08-11 14:46:33 | 0:19:18 | 0:10:45 | 0:08:33 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
fail | 6334601 | 2021-08-11 14:17:38 | 2021-08-11 14:27:16 | 2021-08-11 15:01:24 | 0:34:08 | 0:23:23 | 0:10:45 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi178 with status 5: 'sudo systemctl stop ceph-f810ae30-fab2-11eb-8c24-001a4aab830c@mon.b'
fail | 6334602 | 2021-08-11 14:17:39 | 2021-08-11 14:27:56 | 2021-08-11 14:59:23 | 0:31:27 | 0:20:43 | 0:10:44 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-81c697e4-fab2-11eb-8c24-001a4aab830c@mon.b'
fail | 6334604 | 2021-08-11 14:17:39 | 2021-08-11 14:28:46 | 2021-08-11 14:47:46 | 0:19:00 | 0:10:17 | 0:08:43 | smithi | master | centos | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{centos_8.3_kubic_stable} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 76395ba0-fab2-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334606 | 2021-08-11 14:17:40 | 2021-08-11 14:28:47 | 2021-08-11 15:02:15 | 0:33:28 | 0:23:41 | 0:09:47 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-158aa862-fab3-11eb-8c24-001a4aab830c@mon.b'
dead | 6334608 | 2021-08-11 14:17:41 | 2021-08-11 14:29:07 | 2021-08-12 02:39:06 | 12:09:59 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/classic} | 2 | |
Failure Reason: hit max job timeout
fail | 6334609 | 2021-08-11 14:17:42 | 2021-08-11 14:29:57 | 2021-08-11 15:02:47 | 0:32:50 | 0:25:31 | 0:07:19 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334611 | 2021-08-11 14:17:43 | 2021-08-11 14:29:58 | 2021-08-11 14:51:17 | 0:21:19 | 0:10:47 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} | 1 | |
fail | 6334613 | 2021-08-11 14:17:44 | 2021-08-11 14:31:58 | 2021-08-11 15:05:38 | 0:33:40 | 0:22:24 | 0:11:16 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-66942b2a-fab3-11eb-8c24-001a4aab830c@mon.b'
fail | 6334615 | 2021-08-11 14:17:45 | 2021-08-11 14:32:28 | 2021-08-11 15:06:30 | 0:34:02 | 0:22:45 | 0:11:17 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334617 | 2021-08-11 14:17:47 | 2021-08-11 14:32:49 | 2021-08-11 14:59:33 | 0:26:44 | 0:17:14 | 0:09:30 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6334619 | 2021-08-11 14:17:48 | 2021-08-11 14:32:49 | 2021-08-11 15:05:18 | 0:32:29 | 0:21:06 | 0:11:23 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6334621 | 2021-08-11 14:17:49 | 2021-08-11 14:33:59 | 2021-08-12 02:43:28 | 12:09:29 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
fail | 6334623 | 2021-08-11 14:17:50 | 2021-08-11 14:34:00 | 2021-08-11 15:05:24 | 0:31:24 | 0:25:06 | 0:06:18 | smithi | master | rhel | 8.3 | rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi087 with status 5: 'sudo systemctl stop ceph-85ff78a2-fab3-11eb-8c24-001a4aab830c@mon.b'
fail | 6334625 | 2021-08-11 14:17:51 | 2021-08-11 14:34:30 | 2021-08-11 15:09:54 | 0:35:24 | 0:23:18 | 0:12:06 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi110 with status 5: 'sudo systemctl stop ceph-174d9abe-fab4-11eb-8c24-001a4aab830c@mon.b'
fail | 6334627 | 2021-08-11 14:17:52 | 2021-08-11 14:36:11 | 2021-08-11 15:09:36 | 0:33:25 | 0:26:02 | 0:07:23 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334629 | 2021-08-11 14:17:53 | 2021-08-11 14:36:41 | 2021-08-11 14:55:54 | 0:19:13 | 0:10:53 | 0:08:20 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
fail | 6334631 | 2021-08-11 14:17:54 | 2021-08-11 14:36:42 | 2021-08-11 15:15:46 | 0:39:04 | 0:32:59 | 0:06:05 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi198 with status 5: 'sudo systemctl stop ceph-f17796d6-fab4-11eb-8c24-001a4aab830c@mon.b'
dead | 6334633 | 2021-08-11 14:17:55 | 2021-08-11 14:36:42 | 2021-08-12 02:46:16 | 12:09:34 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |
Failure Reason: hit max job timeout
fail | 6334635 | 2021-08-11 14:17:56 | 2021-08-11 14:36:52 | 2021-08-11 15:10:47 | 0:33:55 | 0:22:41 | 0:11:14 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334637 | 2021-08-11 14:17:57 | 2021-08-11 14:37:13 | 2021-08-11 15:09:22 | 0:32:09 | 0:21:13 | 0:10:56 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334639 | 2021-08-11 14:17:58 | 2021-08-11 14:38:03 | 2021-08-11 15:10:00 | 0:31:57 | 0:21:51 | 0:10:06 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi122 with status 5: 'sudo systemctl stop ceph-b3682168-fab3-11eb-8c24-001a4aab830c@mon.b'
fail | 6334641 | 2021-08-11 14:17:59 | 2021-08-11 14:38:53 | 2021-08-11 15:13:53 | 0:35:00 | 0:22:57 | 0:12:03 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi144 with status 5: 'sudo systemctl stop ceph-9ca1c9b0-fab4-11eb-8c24-001a4aab830c@mon.b'
fail | 6334643 | 2021-08-11 14:18:00 | 2021-08-11 14:40:54 | 2021-08-11 15:00:12 | 0:19:18 | 0:10:50 | 0:08:28 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} | 1 | |
fail | 6334645 | 2021-08-11 14:18:01 | 2021-08-11 14:40:54 | 2021-08-11 15:16:02 | 0:35:08 | 0:22:18 | 0:12:50 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi135 with status 5: 'sudo systemctl stop ceph-d60ef3e4-fab4-11eb-8c24-001a4aab830c@mon.b'
fail | 6334647 | 2021-08-11 14:18:02 | 2021-08-11 14:42:55 | 2021-08-11 15:04:30 | 0:21:35 | 0:11:21 | 0:10:14 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f3b0ca9e-fab4-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334649 | 2021-08-11 14:18:03 | 2021-08-11 14:42:55 | 2021-08-11 15:14:17 | 0:31:22 | 0:25:05 | 0:06:17 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334651 | 2021-08-11 14:18:04 | 2021-08-11 14:42:55 | 2021-08-11 15:17:28 | 0:34:33 | 0:22:43 | 0:11:50 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334653 | 2021-08-11 14:18:05 | 2021-08-11 14:44:06 | 2021-08-11 15:26:47 | 0:42:41 | 0:33:44 | 0:08:57 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-525737c6-fab6-11eb-8c24-001a4aab830c@mon.b'
pass | 6334654 | 2021-08-11 14:18:06 | 2021-08-11 14:45:06 | 2021-08-11 15:04:03 | 0:18:57 | 0:10:00 | 0:08:57 | smithi | master | centos | 8.stream | rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6334656 | 2021-08-11 14:18:07 | 2021-08-11 14:45:06 | 2021-08-11 15:02:05 | 0:16:59 | 0:07:09 | 0:09:50 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo kubeadm init --node-name smithi027 --token abcdef.fri31041dhy6ebgp --pod-network-cidr 10.248.208.0/21'
dead | 6334658 | 2021-08-11 14:18:08 | 2021-08-11 14:45:06 | 2021-08-12 02:56:41 | 12:11:35 | | | smithi | master | ubuntu | 20.04 | rados/upgrade/parallel/{0-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: hit max job timeout
dead | 6334660 | 2021-08-11 14:18:09 | 2021-08-11 14:47:57 | 2021-08-12 02:57:59 | 12:10:02 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
fail | 6334662 | 2021-08-11 14:18:11 | 2021-08-11 14:49:07 | 2021-08-11 15:09:54 | 0:20:47 | 0:11:37 | 0:09:10 | smithi | master | centos | 8.3 | rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_kubic_stable} 2-node-mgr orchestrator_cli} | 2 | |
Failure Reason: Test failure: test_device_ls (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)
fail | 6334665 | 2021-08-11 14:18:12 | 2021-08-11 14:49:08 | 2021-08-11 15:21:04 | 0:31:56 | 0:21:33 | 0:10:23 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-85de0e4a-fab5-11eb-8c24-001a4aab830c@mon.b'
fail | 6334667 | 2021-08-11 14:18:13 | 2021-08-11 14:50:18 | 2021-08-11 15:12:20 | 0:22:02 | 0:10:46 | 0:11:16 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} | 1 | |
fail | 6334669 | 2021-08-11 14:18:14 | 2021-08-11 14:50:38 | 2021-08-11 15:22:14 | 0:31:36 | 0:21:03 | 0:10:33 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334671 | 2021-08-11 14:18:15 | 2021-08-11 14:51:19 | 2021-08-11 15:13:01 | 0:21:42 | 0:14:13 | 0:07:29 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.3_kubic_stable} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f1dc0318-fab5-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334673 | 2021-08-11 14:18:16 | 2021-08-11 14:51:59 | 2021-08-11 15:26:10 | 0:34:11 | 0:22:52 | 0:11:19 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason: Command failed on smithi193 with status 5: 'sudo systemctl stop ceph-55fe378a-fab6-11eb-8c24-001a4aab830c@mon.b'
dead | 6334675 | 2021-08-11 14:18:17 | 2021-08-11 14:53:00 | 2021-08-12 03:01:06 | 12:08:06 | | | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} | 2 | |
Failure Reason: hit max job timeout
fail | 6334677 | 2021-08-11 14:18:18 | 2021-08-11 14:53:00 | 2021-08-11 15:24:45 | 0:31:45 | 0:24:52 | 0:06:53 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334679 | 2021-08-11 14:18:19 | 2021-08-11 14:53:50 | 2021-08-11 15:27:30 | 0:33:40 | 0:22:48 | 0:10:52 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334681 | 2021-08-11 14:18:20 | 2021-08-11 14:54:01 | 2021-08-11 15:14:00 | 0:19:59 | 0:10:37 | 0:09:22 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} | 1 | |
fail | 6334683 | 2021-08-11 14:18:21 | 2021-08-11 14:54:31 | 2021-08-11 15:19:13 | 0:24:42 | 0:16:57 | 0:07:45 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 6334684 | 2021-08-11 14:18:22 | 2021-08-11 14:54:31 | 2021-08-11 15:29:11 | 0:34:40 | 0:22:14 | 0:12:26 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi166 with status 5: 'sudo systemctl stop ceph-ab83cc92-fab6-11eb-8c24-001a4aab830c@mon.b'
fail | 6334685 | 2021-08-11 14:18:23 | 2021-08-11 14:56:02 | 2021-08-11 15:26:57 | 0:30:55 | 0:25:24 | 0:05:31 | smithi | master | rhel | 8.3 | rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi067 with status 5: 'sudo systemctl stop ceph-8b07309e-fab6-11eb-8c24-001a4aab830c@mon.b'
fail | 6334686 | 2021-08-11 14:18:24 | 2021-08-11 14:56:12 | 2021-08-11 15:30:40 | 0:34:28 | 0:23:14 | 0:11:14 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed on smithi197 with status 5: 'sudo systemctl stop ceph-edb647b6-fab6-11eb-8c24-001a4aab830c@mon.b'
fail | 6334687 | 2021-08-11 14:18:25 | 2021-08-11 14:57:42 | 2021-08-11 15:29:02 | 0:31:20 | 0:21:05 | 0:10:15 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334688 | 2021-08-11 14:18:26 | 2021-08-11 14:57:53 | 2021-08-11 15:32:16 | 0:34:23 | 0:23:21 | 0:11:02 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-cbb7a79a-fab6-11eb-8c24-001a4aab830c@mon.b'
fail | 6334689 | 2021-08-11 14:18:28 | 2021-08-11 14:58:44 | 2021-08-11 15:32:12 | 0:33:28 | 0:26:07 | 0:07:21 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6334690 | 2021-08-11 14:18:29 | 2021-08-11 14:59:25 | 2021-08-12 03:08:47 | 12:09:22 | | | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |
Failure Reason: hit max job timeout
fail | 6334691 | 2021-08-11 14:18:30 | 2021-08-11 15:00:05 | 2021-08-11 15:19:23 | 0:19:18 | 0:10:45 | 0:08:33 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} | 1 | |
fail | 6334692 | 2021-08-11 14:18:31 | 2021-08-11 15:00:15 | 2021-08-11 15:35:05 | 0:34:50 | 0:23:36 | 0:11:14 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
dead | 6334693 | 2021-08-11 14:18:32 | 2021-08-11 15:01:27 | 2021-08-12 03:10:31 | 12:09:04 | | | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: hit max job timeout
fail | 6334694 | 2021-08-11 14:18:33 | 2021-08-11 15:01:27 | 2021-08-11 15:33:12 | 0:31:45 | 0:20:58 | 0:10:47 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi150 with status 5: 'sudo systemctl stop ceph-f3c7343a-fab6-11eb-8c24-001a4aab830c@mon.b'
fail | 6334695 | 2021-08-11 14:18:34 | 2021-08-11 15:01:37 | 2021-08-11 15:34:52 | 0:33:15 | 0:23:35 | 0:09:40 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed on smithi142 with status 5: 'sudo systemctl stop ceph-99fbb34e-fab7-11eb-8c24-001a4aab830c@mon.b'
fail | 6334696 | 2021-08-11 14:18:35 | 2021-08-11 15:01:48 | 2021-08-11 15:32:52 | 0:31:04 | 0:21:23 | 0:09:41 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334697 | 2021-08-11 14:18:36 | 2021-08-11 15:01:48 | 2021-08-11 15:22:17 | 0:20:29 | 0:11:38 | 0:08:51 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 63e0eafe-fab7-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334698 | 2021-08-11 14:18:37 | 2021-08-11 15:02:08 | 2021-08-11 15:21:33 | 0:19:25 | 0:10:46 | 0:08:39 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} | 1 | |
fail | 6334699 | 2021-08-11 14:18:38 | 2021-08-11 15:02:19 | 2021-08-11 15:33:47 | 0:31:28 | 0:25:19 | 0:06:09 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: reached maximum tries (180) after waiting for 180 seconds
fail | 6334700 | 2021-08-11 14:18:39 | 2021-08-11 15:02:49 | 2021-08-11 15:27:04 | 0:24:15 | 0:12:20 | 0:11:55 | smithi | master | centos | 8.2 | rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi098 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d8b07480-fab7-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
fail | 6334701 | 2021-08-11 14:18:40 | 2021-08-11 15:04:09 | 2021-08-11 15:25:01 | 0:20:52 | 0:07:24 | 0:13:28 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: Command failed on smithi039 with status 1: 'sudo kubeadm init --node-name smithi039 --token abcdef.61wehjhayypqpmqw --pod-network-cidr 10.249.48.0/21'
fail | 6334702 | 2021-08-11 14:18:41 | 2021-08-11 15:05:20 | 2021-08-11 15:38:53 | 0:33:33 | 0:22:59 | 0:10:34 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason:
Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-01934328-fab8-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
dead | 6334703 | 2021-08-11 14:18:42 | 2021-08-11 15:05:40 | 2021-08-11 15:20:48 | 0:15:08 | 0:04:04 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
Failure object was: {'smithi050.front.sepia.ceph.com': {'msg': 'Failed to update apt cache: ', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'force_apt_get': False, 'policy_rc_d': 'None', 'package': 'None', 'autoclean': False, 'install_recommends': 'None', 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': 'None', 'update_cache': True, 'default_release': 'None', 'only_upgrade': False, 'deb': 'None', 'cache_valid_time': 0}}, '_ansible_no_log': False, 'attempts': 24, 'changed': False}}
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_321319b12ea4ff9b63c7655015a3156de2c3b279/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
yaml.representer.RepresenterError: ('cannot represent an object', '_ansible_no_log') |
||||||||||||||
fail | 6334704 | 2021-08-11 14:18:43 | 2021-08-11 15:05:41 | 2021-08-11 15:39:27 | 0:33:46 | 0:23:46 | 0:10:00 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi171 with status 5: 'sudo systemctl stop ceph-352e3b16-fab8-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334705 | 2021-08-11 14:18:44 | 2021-08-11 15:06:31 | 2021-08-11 15:38:49 | 0:32:18 | 0:20:49 | 0:11:29 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi170 with status 5: 'sudo systemctl stop ceph-f65b9a1e-fab7-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334706 | 2021-08-11 14:18:45 | 2021-08-11 15:07:41 | 2021-08-11 15:24:49 | 0:17:08 | 0:07:41 | 0:09:27 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason:
Command failed on smithi007 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b1c0fb8-fab7-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
||||||||||||||
fail | 6334707 | 2021-08-11 14:18:46 | 2021-08-11 15:07:42 | 2021-08-11 15:42:39 | 0:34:57 | 0:22:52 | 0:12:05 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi138 with status 5: 'sudo systemctl stop ceph-b34adbe4-fab8-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
dead | 6334708 | 2021-08-11 14:18:47 | 2021-08-11 15:09:42 | 2021-08-12 03:18:33 | 12:08:51 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 6334709 | 2021-08-11 14:18:48 | 2021-08-11 15:10:02 | 2021-08-11 15:29:07 | 0:19:05 | 0:10:35 | 0:08:30 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} | 1 | |
fail | 6334710 | 2021-08-11 14:18:49 | 2021-08-11 15:10:03 | 2021-08-11 15:41:37 | 0:31:34 | 0:21:41 | 0:09:53 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334711 | 2021-08-11 14:18:50 | 2021-08-11 15:10:03 | 2021-08-11 15:42:31 | 0:32:28 | 0:25:16 | 0:07:12 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334712 | 2021-08-11 14:18:51 | 2021-08-11 15:10:53 | 2021-08-11 15:35:59 | 0:25:06 | 0:16:27 | 0:08:39 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi169 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
dead | 6334713 | 2021-08-11 14:18:52 | 2021-08-11 15:10:54 | 2021-08-12 03:19:15 | 12:08:21 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 6334714 | 2021-08-11 14:18:53 | 2021-08-11 15:10:54 | 2021-08-11 15:43:58 | 0:33:04 | 0:24:28 | 0:08:36 | smithi | master | rhel | 8.3 | rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi185 with status 5: 'sudo systemctl stop ceph-d5a96aca-fab8-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334715 | 2021-08-11 14:18:54 | 2021-08-11 15:12:45 | 2021-08-11 15:46:20 | 0:33:35 | 0:22:31 | 0:11:04 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/basic 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334716 | 2021-08-11 14:18:55 | 2021-08-11 15:12:45 | 2021-08-11 15:34:14 | 0:21:29 | 0:11:00 | 0:10:29 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} | 1 | |
fail | 6334717 | 2021-08-11 14:18:57 | 2021-08-11 15:13:05 | 2021-08-11 15:48:49 | 0:35:44 | 0:23:50 | 0:11:54 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi137 with status 5: 'sudo systemctl stop ceph-3bf74ba8-fab9-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334718 | 2021-08-11 14:18:57 | 2021-08-11 15:13:46 | 2021-08-11 15:44:53 | 0:31:07 | 0:21:04 | 0:10:03 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334719 | 2021-08-11 14:18:59 | 2021-08-11 15:13:56 | 2021-08-11 15:47:28 | 0:33:32 | 0:22:58 | 0:10:34 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
Command failed on smithi078 with status 5: 'sudo systemctl stop ceph-355c5252-fab9-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334720 | 2021-08-11 14:19:00 | 2021-08-11 15:14:27 | 2021-08-11 15:48:34 | 0:34:07 | 0:23:55 | 0:10:12 | smithi | master | centos | 8.3 | rados/cephadm/with-work/{0-distro/centos_8.3_kubic_stable fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi198 with status 5: 'sudo systemctl stop ceph-727e6a44-fab9-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
dead | 6334721 | 2021-08-11 14:19:01 | 2021-08-11 15:15:47 | 2021-08-12 03:24:34 | 12:08:47 | smithi | master | centos | 8.3 | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.3-octopus 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
pass | 6334722 | 2021-08-11 14:19:02 | 2021-08-11 15:16:07 | 2021-08-11 15:40:23 | 0:24:16 | 0:12:17 | 0:11:59 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 6334723 | 2021-08-11 14:19:03 | 2021-08-11 15:17:38 | 2021-08-11 15:50:51 | 0:33:13 | 0:26:22 | 0:06:51 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
pass | 6334724 | 2021-08-11 14:19:04 | 2021-08-11 15:17:48 | 2021-08-11 15:35:21 | 0:17:33 | 0:08:25 | 0:09:08 | smithi | master | ubuntu | 20.04 | rados/singleton/{all/divergent_priors2 mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6334725 | 2021-08-11 14:19:05 | 2021-08-11 15:17:48 | 2021-08-11 16:15:22 | 0:57:34 | 0:51:47 | 0:05:47 | smithi | master | rhel | 8.4 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
fail | 6334726 | 2021-08-11 14:19:06 | 2021-08-11 15:18:39 | 2021-08-11 15:53:50 | 0:35:11 | 0:22:22 | 0:12:49 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi097 with status 5: 'sudo systemctl stop ceph-82f0a22a-fab9-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334728 | 2021-08-11 14:19:07 | 2021-08-11 15:18:39 | 2021-08-11 15:38:05 | 0:19:26 | 0:10:47 | 0:08:39 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} | 1 | |
fail | 6334730 | 2021-08-11 14:19:08 | 2021-08-11 15:19:19 | 2021-08-11 15:52:52 | 0:33:33 | 0:23:03 | 0:10:30 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
Failure Reason:
Command failed on smithi200 with status 5: 'sudo systemctl stop ceph-088ce092-faba-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334732 | 2021-08-11 14:19:09 | 2021-08-11 15:19:50 | 2021-08-11 15:55:19 | 0:35:29 | 0:23:58 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334735 | 2021-08-11 14:19:10 | 2021-08-11 15:19:50 | 2021-08-11 15:39:32 | 0:19:42 | 0:10:59 | 0:08:43 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi087 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f45d960c-fab9-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
||||||||||||||
fail | 6334736 | 2021-08-11 14:19:11 | 2021-08-11 15:20:50 | 2021-08-11 15:51:52 | 0:31:02 | 0:21:28 | 0:09:34 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs-ingress-rgw 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334739 | 2021-08-11 14:19:12 | 2021-08-11 15:21:11 | 2021-08-11 15:54:35 | 0:33:24 | 0:26:02 | 0:07:22 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334741 | 2021-08-11 14:19:13 | 2021-08-11 15:21:41 | 2021-08-11 15:41:35 | 0:19:54 | 0:07:59 | 0:11:55 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason:
Command failed on smithi027 with status 1: 'sudo kubeadm init --node-name smithi027 --token abcdef.pvvm1exfp2ukda0z --pod-network-cidr 10.248.208.0/21' |
||||||||||||||
dead | 6334743 | 2021-08-11 14:19:15 | 2021-08-11 15:22:21 | 2021-08-12 03:33:02 | 12:10:41 | smithi | master | centos | 8.3 | rados/upgrade/parallel/{0-distro$/{centos_8.3_kubic_stable} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 6334745 | 2021-08-11 14:19:16 | 2021-08-11 15:24:42 | 2021-08-11 16:05:35 | 0:40:53 | 0:34:19 | 0:06:34 | smithi | master | rhel | 8.3 | rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed on smithi101 with status 5: 'sudo systemctl stop ceph-bee846dc-fabb-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334747 | 2021-08-11 14:19:17 | 2021-08-11 15:24:52 | 2021-08-11 20:24:28 | 4:59:36 | 4:49:21 | 0:10:15 | smithi | master | centos | 8.3 | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on smithi145 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh' |
||||||||||||||
pass | 6334749 | 2021-08-11 14:19:18 | 2021-08-11 15:25:02 | 2021-08-11 15:55:43 | 0:30:41 | 0:22:21 | 0:08:20 | smithi | master | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
dead | 6334751 | 2021-08-11 14:19:19 | 2021-08-11 15:26:13 | 2021-08-12 03:35:20 | 12:09:07 | smithi | master | centos | 8.2 | rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 6334753 | 2021-08-11 14:19:20 | 2021-08-11 15:26:53 | 2021-08-11 15:45:45 | 0:18:52 | 0:10:38 | 0:08:14 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} | 1 | |
pass | 6334755 | 2021-08-11 14:19:21 | 2021-08-11 15:27:03 | 2021-08-11 15:58:10 | 0:31:07 | 0:23:42 | 0:07:25 | smithi | master | rhel | 8.4 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-lz4 supported-random-distro$/{rhel_8} tasks/insights} | 2 | |
fail | 6334757 | 2021-08-11 14:19:22 | 2021-08-11 15:27:14 | 2021-08-11 15:56:30 | 0:29:16 | 0:21:46 | 0:07:30 | smithi | master | rhel | 8.3 | rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.3_kubic_stable} 2-node-mgr orchestrator_cli} | 2 | |
Failure Reason:
Test failure: test_device_ls (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli) |
||||||||||||||
dead | 6334759 | 2021-08-11 14:19:23 | 2021-08-11 15:27:34 | 2021-08-12 03:37:43 | 12:10:09 | smithi | master | centos | 8.3 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 6334761 | 2021-08-11 14:19:24 | 2021-08-11 15:29:05 | 2021-08-11 16:00:11 | 0:31:06 | 0:20:31 | 0:10:35 | smithi | master | centos | 8.3 | rados/cephadm/smoke/{distro/centos_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
Command failed on smithi196 with status 5: 'sudo systemctl stop ceph-ff60cd34-faba-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334763 | 2021-08-11 14:19:25 | 2021-08-11 15:29:16 | 2021-08-11 16:04:17 | 0:35:01 | 0:24:37 | 0:10:24 | smithi | master | centos | 8.3 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason:
Command failed on smithi110 with status 5: 'sudo systemctl stop ceph-6c5f5612-fabb-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334765 | 2021-08-11 14:19:26 | 2021-08-11 15:29:16 | 2021-08-11 15:47:53 | 0:18:37 | 0:07:36 | 0:11:01 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
Command failed on smithi192 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7546c41ab524b652a8ef9ff4bc8783b116a2b3fb shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c3f28ed6-faba-11eb-8c24-001a4aab830c -- ceph-volume lvm zap /dev/vg_nvme/lv_4' |
||||||||||||||
fail | 6334766 | 2021-08-11 14:19:27 | 2021-08-11 15:30:46 | 2021-08-11 16:07:28 | 0:36:42 | 0:24:09 | 0:12:33 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-d460f522-fabb-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
dead | 6334768 | 2021-08-11 14:19:28 | 2021-08-11 15:32:17 | 2021-08-12 03:41:31 | 12:09:14 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
pass | 6334770 | 2021-08-11 14:19:29 | 2021-08-11 15:32:17 | 2021-08-11 16:11:37 | 0:39:20 | 0:29:25 | 0:09:55 | smithi | master | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 6334772 | 2021-08-11 14:19:30 | 2021-08-11 15:32:57 | 2021-08-11 16:45:55 | 1:12:58 | 1:03:24 | 0:09:34 | smithi | master | centos | 8.stream | rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} | 1 | |
fail | 6334774 | 2021-08-11 14:19:31 | 2021-08-11 15:32:58 | 2021-08-11 16:06:43 | 0:33:45 | 0:22:25 | 0:11:20 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334776 | 2021-08-11 14:19:32 | 2021-08-11 15:33:18 | 2021-08-11 15:54:55 | 0:21:37 | 0:11:05 | 0:10:32 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} | 1 | |
pass | 6334777 | 2021-08-11 14:19:33 | 2021-08-11 15:33:48 | 2021-08-11 15:51:04 | 0:17:16 | 0:07:43 | 0:09:33 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6334779 | 2021-08-11 14:19:34 | 2021-08-11 15:33:48 | 2021-08-11 16:03:44 | 0:29:56 | 0:22:57 | 0:06:59 | smithi | master | rhel | 8.4 | rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} tasks/mon_recovery} | 2 | |
fail | 6334781 | 2021-08-11 14:19:35 | 2021-08-11 15:34:59 | 2021-08-11 16:06:28 | 0:31:29 | 0:21:18 | 0:10:11 | smithi | master | centos | 8.3 | rados/cephadm/smoke-roleless/{0-distro/centos_8.3_kubic_stable 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334783 | 2021-08-11 14:19:36 | 2021-08-11 15:35:09 | 2021-08-11 16:02:03 | 0:26:54 | 0:16:52 | 0:10:02 | smithi | master | centos | 8.2 | rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7546c41ab524b652a8ef9ff4bc8783b116a2b3fb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
||||||||||||||
fail | 6334785 | 2021-08-11 14:19:37 | 2021-08-11 15:35:09 | 2021-08-11 16:09:19 | 0:34:10 | 0:25:33 | 0:08:37 | smithi | master | rhel | 8.3 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/nfs2 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
pass | 6334787 | 2021-08-11 14:19:38 | 2021-08-11 15:36:10 | 2021-08-11 16:12:21 | 0:36:11 | 0:22:43 | 0:13:28 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
fail | 6334789 | 2021-08-11 14:19:39 | 2021-08-11 15:39:00 | 2021-08-11 16:10:06 | 0:31:06 | 0:24:18 | 0:06:48 | smithi | master | rhel | 8.3 | rados/cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
Command failed on smithi170 with status 5: 'sudo systemctl stop ceph-7df69556-fabc-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334791 | 2021-08-11 14:19:40 | 2021-08-11 15:39:00 | 2021-08-11 16:12:32 | 0:33:32 | 0:23:11 | 0:10:21 | smithi | master | centos | 8.2 | rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason:
Command failed on smithi171 with status 5: 'sudo systemctl stop ceph-dae882ba-fabc-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334793 | 2021-08-11 14:19:41 | 2021-08-11 15:39:31 | 2021-08-11 16:12:48 | 0:33:17 | 0:22:37 | 0:10:40 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
||||||||||||||
fail | 6334795 | 2021-08-11 14:19:42 | 2021-08-11 15:39:41 | 2021-08-11 16:15:20 | 0:35:39 | 0:24:08 | 0:11:31 | smithi | master | ubuntu | 20.04 | rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
Command failed on smithi119 with status 5: 'sudo systemctl stop ceph-9081db86-fabc-11eb-8c24-001a4aab830c@mon.b' |
||||||||||||||
fail | 6334797 | 2021-08-11 14:19:43 | 2021-08-11 15:40:01 | 2021-08-11 15:59:53 | 0:19:52 | 0:10:36 | 0:09:16 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} | 1 | |
dead | 6334799 | 2021-08-11 14:19:44 | 2021-08-11 15:40:32 | 2021-08-12 03:50:02 | 12:09:30 | smithi | master | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait mon_election/connectivity} | 2 | |||
Failure Reason:
hit max job timeout |