Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 6716192 2022-03-02 15:59:11 2022-03-02 16:51:39 2022-03-02 17:19:37 0:27:58 0:19:55 0:08:03 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
pass 6716193 2022-03-02 15:59:12 2022-03-02 16:52:50 2022-03-02 17:44:45 0:51:55 0:41:08 0:10:47 smithi master centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_mon_osdmap_prune} 2
pass 6716194 2022-03-02 15:59:13 2022-03-02 16:52:50 2022-03-02 17:29:00 0:36:10 0:25:39 0:10:31 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
fail 6716195 2022-03-02 15:59:14 2022-03-02 16:53:50 2022-03-02 17:09:29 0:15:39 0:05:40 0:09:59 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi062.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6716196 2022-03-02 15:59:15 2022-03-02 16:54:11 2022-03-02 20:07:28 3:13:17 2:59:55 0:13:22 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} mon_election/classic thrashosds-health ubuntu_18.04} 4
pass 6716197 2022-03-02 15:59:16 2022-03-02 16:57:51 2022-03-02 17:35:01 0:37:10 0:26:48 0:10:22 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6716198 2022-03-02 15:59:17 2022-03-02 16:57:52 2022-03-02 17:21:31 0:23:39 0:14:17 0:09:22 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/rados_python} 2
pass 6716199 2022-03-02 15:59:18 2022-03-02 16:58:32 2022-03-02 17:17:35 0:19:03 0:09:53 0:09:10 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 6716200 2022-03-02 15:59:19 2022-03-02 16:58:43 2022-03-02 17:37:09 0:38:26 0:27:03 0:11:23 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
pass 6716201 2022-03-02 15:59:20 2022-03-02 16:58:53 2022-03-02 18:29:52 1:30:59 1:20:13 0:10:46 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
pass 6716202 2022-03-02 15:59:22 2022-03-02 17:00:03 2022-03-02 17:37:42 0:37:39 0:28:35 0:09:04 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6716203 2022-03-02 15:59:23 2022-03-02 17:00:24 2022-03-02 17:28:26 0:28:02 0:17:10 0:10:52 smithi master ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 6716204 2022-03-02 15:59:24 2022-03-02 17:02:54 2022-03-02 17:34:24 0:31:30 0:22:43 0:08:47 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6716205 2022-03-02 15:59:25 2022-03-02 17:03:15 2022-03-02 17:40:55 0:37:40 0:29:08 0:08:32 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
pass 6716206 2022-03-02 15:59:26 2022-03-02 17:03:45 2022-03-02 17:57:24 0:53:39 0:41:52 0:11:47 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 6716207 2022-03-02 15:59:27 2022-03-02 17:05:56 2022-03-02 17:29:13 0:23:17 0:17:28 0:05:49 smithi master rhel 8.4 rados/cephadm/smoke-singlehost/{0-distro$/{rhel_8.4_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
pass 6716208 2022-03-02 15:59:28 2022-03-02 17:05:56 2022-03-02 17:32:49 0:26:53 0:13:44 0:13:09 smithi master ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
fail 6716209 2022-03-02 15:59:29 2022-03-02 17:08:57 2022-03-02 17:33:05 0:24:08 0:13:18 0:10:50 smithi master ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi062 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:09205f6ae7d9b26d5c244b8a6aa25a82ce74e022 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c4da7cc-9a4d-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

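For readability, the command that failed in job 6716209 embeds the following OSD remove/zap/re-add script. This is the same sequence as in the escaped log above, de-escaped and commented; it operates on a live cephadm cluster (host/device names are resolved at runtime), so it is shown here only to clarify what the test ran, not as something to execute locally:

```shell
set -e
set -x
ceph orch ps
ceph orch device ls
# Look up the device ID, host, and device path backing osd.1
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# Remove osd.1 and poll until the removal completes
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
# Zap the freed device and re-add it as a new OSD
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
# Wait for the replacement OSD to report up
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```

The status-22 failure indicates one of these `ceph orch` invocations exited non-zero inside the cephadm shell.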
pass 6716210 2022-03-02 15:59:30 2022-03-02 17:09:37 2022-03-02 17:49:20 0:39:43 0:28:55 0:10:48 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6716211 2022-03-02 15:59:31 2022-03-02 17:10:28 2022-03-02 17:50:45 0:40:17 0:29:56 0:10:21 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
dead 6716212 2022-03-02 15:59:32 2022-03-02 17:11:28 2022-03-02 23:53:03 6:41:35 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6716213 2022-03-02 15:59:33 2022-03-02 17:13:29 2022-03-02 17:38:48 0:25:19 0:12:57 0:12:22 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 6716214 2022-03-02 15:59:34 2022-03-02 17:15:39 2022-03-02 17:58:43 0:43:04 0:33:55 0:09:09 smithi master centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 6716215 2022-03-02 15:59:35 2022-03-02 17:15:40 2022-03-02 17:41:11 0:25:31 0:19:30 0:06:01 smithi master rhel 8.4 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.4_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
fail 6716216 2022-03-02 15:59:36 2022-03-02 17:16:20 2022-03-02 23:53:15 6:36:55 6:26:07 0:10:48 smithi master centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi143 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09205f6ae7d9b26d5c244b8a6aa25a82ce74e022 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 6716217 2022-03-02 15:59:37 2022-03-02 17:17:30 2022-03-02 19:14:08 1:56:38 1:46:45 0:09:53 smithi master ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
dead 6716218 2022-03-02 15:59:38 2022-03-02 17:18:41 2022-03-02 17:39:29 0:20:48 smithi master ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} mon_election/connectivity thrashosds-health ubuntu_18.04} 4
Failure Reason:

SSH connection to smithi089 was lost: 'uname -r'

dead 6716219 2022-03-02 15:59:40 2022-03-02 17:21:42 2022-03-03 00:02:52 6:41:10 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6716220 2022-03-02 15:59:41 2022-03-02 17:22:22 2022-03-02 17:39:39 0:17:17 0:06:04 0:11:13 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi017.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6716221 2022-03-02 15:59:42 2022-03-02 17:23:43 2022-03-02 17:50:51 0:27:08 0:17:17 0:09:51 smithi master centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 6716222 2022-03-02 15:59:43 2022-03-02 17:23:43 2022-03-02 18:05:47 0:42:04 0:28:34 0:13:30 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6716223 2022-03-02 15:59:44 2022-03-02 17:28:34 2022-03-02 17:54:32 0:25:58 0:17:02 0:08:56 smithi master ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 6716224 2022-03-02 15:59:45 2022-03-02 17:29:04 2022-03-02 18:02:47 0:33:43 0:22:51 0:10:52 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6716225 2022-03-02 15:59:46 2022-03-02 17:29:15 2022-03-02 17:51:08 0:21:53 smithi master ubuntu 18.04 rados/cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/rgw 3-final} 1
Failure Reason:

SSH connection to smithi152 was lost: 'uname -r'

dead 6716226 2022-03-02 15:59:47 2022-03-02 17:32:55 2022-03-02 17:51:02 0:18:07 smithi master ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

SSH connection to smithi154 was lost: 'uname -r'

pass 6716227 2022-03-02 15:59:48 2022-03-02 17:33:16 2022-03-02 18:13:30 0:40:14 0:29:05 0:11:09 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
dead 6716228 2022-03-02 15:59:49 2022-03-02 17:34:26 2022-03-03 00:15:14 6:40:48 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout