Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass | 6646480 | 2022-01-28 15:53:27 | 2022-01-28 15:54:12 | 2022-01-28 16:35:20 | 0:41:08 | 0:30:48 | 0:10:20 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2
dead | 6646481 | 2022-01-28 15:53:28 | 2022-01-28 15:54:12 | 2022-01-28 15:55:17 | 0:01:05 | - | - | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: Error reimaging machines: Failed to power on smithi165

dead | 6646482 | 2022-01-28 15:53:29 | 2022-01-28 15:54:13 | 2022-01-28 22:34:49 | 6:40:36 | - | - | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

pass | 6646483 | 2022-01-28 15:53:30 | 2022-01-28 15:54:13 | 2022-01-28 16:22:15 | 0:28:02 | 0:17:09 | 0:10:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2
dead | 6646484 | 2022-01-28 15:53:31 | 2022-01-28 15:54:43 | 2022-01-28 22:35:04 | 6:40:21 | - | - | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

dead | 6646485 | 2022-01-28 15:53:32 | 2022-01-28 15:54:44 | 2022-01-28 22:34:54 | 6:40:10 | - | - | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

fail | 6646486 | 2022-01-28 15:53:33 | 2022-01-28 15:55:24 | 2022-01-28 16:22:05 | 0:26:41 | 0:13:11 | 0:13:30 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2
Failure Reason:
Command failed on smithi064 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:656fd1b72f2e6d6132afd224e66dc1ee26b8d0c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid db117102-8054-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
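
For readability, the script embedded in that failure line decodes (unwinding the nested shell quoting by hand, so treat it as a best-effort reconstruction) to roughly the following. It runs inside cephadm shell on the test node:

    set -e
    set -x
    # Log the current daemons and devices.
    ceph orch ps
    ceph orch device ls
    # Find the device backing osd.1, and the host/path it lives on.
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # Remove osd.1 and wait for the removal to complete.
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    # Zap the device, re-add it as an OSD, and wait for osd.1 to come back up.
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done

Under set -e the script aborts at the first failing command, whose exit code (22 here) presumably propagates out of cephadm shell as the reported status.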

pass | 6646487 | 2022-01-28 15:53:34 | 2022-01-28 15:58:15 | 2022-01-28 16:38:14 | 0:39:59 | 0:28:33 | 0:11:26 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2
dead | 6646488 | 2022-01-28 15:53:35 | 2022-01-28 15:58:15 | 2022-01-28 22:39:49 | 6:41:34 | - | - | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

fail | 6646489 | 2022-01-28 15:53:36 | 2022-01-28 15:59:26 | 2022-01-28 16:16:16 | 0:16:50 | 0:06:01 | 0:10:49 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi112.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass | 6646490 | 2022-01-28 15:53:37 | 2022-01-28 16:00:36 | 2022-01-28 16:49:31 | 0:48:55 | 0:37:41 | 0:11:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} | 3
dead | 6646491 | 2022-01-28 15:53:38 | 2022-01-28 16:00:57 | 2022-01-28 22:41:20 | 6:40:23 | - | - | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

pass | 6646492 | 2022-01-28 15:53:40 | 2022-01-28 16:00:57 | 2022-01-28 16:27:08 | 0:26:11 | 0:16:54 | 0:09:17 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2
pass | 6646493 | 2022-01-28 15:53:41 | 2022-01-28 16:02:17 | 2022-01-28 16:27:49 | 0:25:32 | 0:16:38 | 0:08:54 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2
dead | 6646494 | 2022-01-28 15:53:42 | 2022-01-28 16:02:28 | 2022-01-28 22:42:48 | 6:40:20 | - | - | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout

fail | 6646495 | 2022-01-28 15:53:43 | 2022-01-28 16:02:28 | 2022-01-28 16:27:05 | 0:24:37 | 0:13:17 | 0:11:20 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2
Failure Reason:
Command failed on smithi102 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:656fd1b72f2e6d6132afd224e66dc1ee26b8d0c8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 852439cc-8055-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
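
Job 6646495 runs the 2-ops/rm-zap-wait variant; its embedded script decodes to the same sequence as the rm-zap-add one above, except that the explicit `ceph orch daemon add osd $HOST:$DEV` re-add step is absent. After the zap it goes straight to the wait loop:

    ceph orch device zap $HOST $DEV --force
    # No explicit re-add: the test presumably expects the orchestrator to
    # redeploy the OSD onto the freshly zapped device on its own.
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done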

dead | 6646496 | 2022-01-28 15:53:44 | 2022-01-28 16:03:39 | 2022-01-28 22:44:01 | 6:40:22 | - | - | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
Failure Reason: hit max job timeout