User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-01-25 22:16:37 | 2022-01-25 22:18:48 | 2022-01-26 00:28:59 | 2:10:11 | rados | wip-yuri4-testing-2022-01-24-1706-pacific | smithi | 5bfc361 | 3 | 4 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 6640210 | 2022-01-25 22:18:27 | 2022-01-25 22:18:48 | 2022-01-25 22:32:34 | 0:13:46 | 0:03:33 | 0:10:13 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.2_container_tools_3.0 clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2

Failure Reason: Command failed on smithi137 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
dead | 6640211 | 2022-01-25 22:18:28 | 2022-01-25 22:18:58 | 2022-01-26 00:28:59 | 2:10:01 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
pass | 6640212 | 2022-01-25 22:18:29 | 2022-01-25 22:19:18 | 2022-01-25 22:57:43 | 0:38:25 | 0:26:20 | 0:12:05 | smithi | master | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3
dead | 6640213 | 2022-01-25 22:18:30 | 2022-01-25 22:21:39 | 2022-01-26 00:28:26 | 2:06:47 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
pass | 6640214 | 2022-01-25 22:18:31 | 2022-01-25 22:23:19 | 2022-01-25 22:53:03 | 0:29:44 | 0:17:02 | 0:12:42 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2
dead | 6640215 | 2022-01-25 22:18:32 | 2022-01-25 22:25:00 | 2022-01-26 00:27:59 | 2:02:59 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
dead | 6640216 | 2022-01-25 22:18:33 | 2022-01-25 22:25:00 | 2022-01-26 00:28:03 | 2:03:03 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
fail | 6640217 | 2022-01-25 22:18:34 | 2022-01-25 22:26:51 | 2022-01-25 22:50:35 | 0:23:44 | 0:13:11 | 0:10:33 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2

Failure Reason: Command failed on smithi166 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5bfc361ee5ddb060baaa8b6c472feff0ae2b176c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9a855d6e-7e2f-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
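For readability, the escaped one-liner in that failure reason unpacks to roughly the following OSD remove/zap/re-add sequence (a reconstruction from the logged command; it only runs inside the test's live cephadm cluster, so it is a sketch, not something runnable standalone):

```shell
set -e
set -x
ceph orch ps
ceph orch device ls
# find the device backing osd.1
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# remove osd.1 and wait for the removal to finish
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
# zap the freed device and re-add it as a new OSD
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
# wait for the replacement OSD to come back up
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```

Because the script runs under `set -e`, the reported status 22 is the exit code of the first step that failed; the log line alone does not say which step that was.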
dead | 6640218 | 2022-01-25 22:18:35 | 2022-01-25 22:27:01 | 2022-01-26 00:28:51 | 2:01:50 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
fail | 6640219 | 2022-01-25 22:18:36 | 2022-01-25 22:27:02 | 2022-01-25 22:45:44 | 0:18:42 | 0:06:27 | 0:12:15 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3

Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi062.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
dead | 6640220 | 2022-01-25 22:18:37 | 2022-01-25 22:28:12 | 2022-01-26 00:27:53 | 1:59:41 | | | smithi | master | centos | 8.3 | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.3_container_tools_3.0 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
pass | 6640221 | 2022-01-25 22:18:38 | 2022-01-25 22:28:53 | 2022-01-25 22:57:18 | 0:28:25 | 0:18:16 | 0:10:09 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2
dead | 6640222 | 2022-01-25 22:18:39 | 2022-01-25 22:29:33 | 2022-01-26 00:28:59 | 1:59:26 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
fail | 6640223 | 2022-01-25 22:18:41 | 2022-01-25 22:29:43 | 2022-01-25 22:53:41 | 0:23:58 | 0:13:05 | 0:10:53 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2

Failure Reason: Command failed on smithi082 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5bfc361ee5ddb060baaa8b6c472feff0ae2b176c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 08d496e0-7e30-11ec-8c35-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
dead | 6640224 | 2022-01-25 22:18:42 | 2022-01-25 22:30:14 | 2022-01-26 00:28:42 | 1:58:28 | | | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2
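As a quick cross-check, the per-job statuses in the table above can be tallied to confirm the Pass/Fail/Dead counts in the run summary row. The status list below is copied, in job-ID order, from the fifteen job rows; nothing here is invented:

```shell
# Statuses for jobs 6640210..6640224, in order, taken from the jobs table.
statuses="fail dead pass dead pass dead dead fail dead fail dead pass dead fail dead"

pass=0; fail=0; dead=0
for s in $statuses; do
  case $s in
    pass) pass=$((pass+1));;
    fail) fail=$((fail+1));;
    dead) dead=$((dead+1));;
  esac
done

echo "$pass $fail $dead"  # prints: 3 4 8
```

The totals match the summary row's Pass 3 / Fail 4 / Dead 8, so every scheduled job is accounted for.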