Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6966106 2022-08-10 20:38:11 2022-08-10 20:43:46 2022-08-10 21:12:19 0:28:33 0:16:53 0:11:40 smithi main ubuntu 20.04 rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
fail 6966107 2022-08-10 20:38:12 2022-08-10 20:44:37 2022-08-10 21:00:06 0:15:29 0:06:13 0:09:16 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi079.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
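
This job builds against rook/master, where upstream Rook no longer keeps its example manifests at that path (they moved from cluster/examples/kubernetes/ceph/ to deploy/examples/ around Rook v1.8), so the fetch fails. A quick way to confirm on the remote (a sketch; the checkout location is assumed to be the task's working directory):

    # Old path (what the task still looks for) vs. the current upstream layout.
    ls rook/cluster/examples/kubernetes/ceph/operator.yaml   # missing on rook/master
    ls rook/deploy/examples/operator.yaml                    # present since Rook v1.8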

fail 6966108 2022-08-10 20:38:13 2022-08-10 20:44:37 2022-08-10 21:14:24 0:29:47 0:21:20 0:08:27 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi077 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1367011a-18ef-11ed-8431-001a4aab830c -e sha1=cee46c3e5b9015d27983e08f8ebddfb22d21d78e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
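
Unwinding the nested shell quoting (the '"'"' sequence is the usual embed-a-single-quote idiom), the command that returned status 1 is the post-upgrade version-convergence check:

    # Assert every daemon reports the same Ceph version after the upgrade.
    # jq -e exits non-zero when the expression is false, which fails the job.
    ceph versions | jq -e '.overall | length == 1'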

pass 6966109 2022-08-10 20:38:14 2022-08-10 20:45:27 2022-08-10 21:11:28 0:26:01 0:14:23 0:11:38 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
pass 6966110 2022-08-10 20:38:15 2022-08-10 20:47:58 2022-08-10 21:10:52 0:22:54 0:16:25 0:06:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 6966111 2022-08-10 20:38:17 2022-08-10 20:47:58 2022-08-10 21:20:39 0:32:41 0:21:08 0:11:33 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds
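
The harness polls in a retry loop (90 tries over 900 seconds, i.e. every 10 seconds) for the Rook operator pod to come up. A roughly equivalent manual check (a sketch, assuming Rook's default rook-ceph namespace and app=rook-ceph-operator label) would be:

    kubectl -n rook-ceph wait --for=condition=Ready pod \
        -l app=rook-ceph-operator --timeout=900s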

fail 6966112 2022-08-10 20:38:18 2022-08-10 20:49:29 2022-08-10 21:17:36 0:28:07 0:15:35 0:12:32 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi099 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:cee46c3e5b9015d27983e08f8ebddfb22d21d78e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 157ab5fe-18f0-11ed-8431-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
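
For readability, the \n- and quote-escaped script embedded in that command reconstructs to the rm-zap-add exercise below: remove osd.1, wait for the removal to finish, zap its device, re-add it, and wait for it to come back up.

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done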

fail 6966113 2022-08-10 20:38:19 2022-08-10 20:52:20 2022-08-10 21:15:26 0:23:06 0:16:30 0:06:36 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi161.front.sepia.ceph.com: ['type=AVC msg=audit(1660165965.542:18397): avc: denied { ioctl } for pid=122087 comm="iptables" path="/var/lib/containers/storage/overlay/6d8735a35cbbfbf92fb575d9d0e279a4a9ad8e5eeaa49603611113da8378eff3/merged" dev="overlay" ino=3934132 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
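
Note that permissive=1 in the AVC record means SELinux logged the denial without enforcing it, so the iptables call itself succeeded; the harness nevertheless fails the job on any denial it finds. To inspect such records on the node (a sketch, assuming auditd and the policycoreutils audit2allow tool are available):

    # List recent AVC denials from the audit log.
    sudo ausearch -m AVC -ts recent
    # Summarize the rules a policy module would need (for triage only).
    sudo ausearch -m AVC -ts recent | audit2allow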

fail 6966114 2022-08-10 20:38:20 2022-08-10 20:52:20 2022-08-10 21:24:41 0:32:21 0:22:42 0:09:39 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi192 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 846cf08a-18f0-11ed-8431-001a4aab830c -e sha1=cee46c3e5b9015d27983e08f8ebddfb22d21d78e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6966115 2022-08-10 20:38:22 2022-08-10 20:55:31 2022-08-10 21:25:04 0:29:33 0:21:32 0:08:01 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9251fbc8-18f0-11ed-8431-001a4aab830c -e sha1=cee46c3e5b9015d27983e08f8ebddfb22d21d78e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6966116 2022-08-10 20:38:23 2022-08-10 20:56:21 2022-08-10 21:17:30 0:21:09 0:07:01 0:14:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi046.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6966117 2022-08-10 20:38:24 2022-08-10 21:00:12 2022-08-10 21:24:36 0:24:24 0:16:26 0:07:58 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 6966118 2022-08-10 20:38:25 2022-08-10 21:01:43 2022-08-10 21:24:36 0:22:53 0:16:46 0:06:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi098.front.sepia.ceph.com: ['type=AVC msg=audit(1660166532.826:18361): avc: denied { ioctl } for pid=122271 comm="iptables" path="/var/lib/containers/storage/overlay/460d73e18a1f017a19c755104edb5267d0827985ecb73683a588cc38b25f1232/merged" dev="overlay" ino=3805289 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 6966119 2022-08-10 20:38:27 2022-08-10 21:01:43 2022-08-10 21:33:11 0:31:28 0:22:48 0:08:40 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9eb01a6-18f1-11ed-8431-001a4aab830c -e sha1=cee46c3e5b9015d27983e08f8ebddfb22d21d78e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6966120 2022-08-10 20:38:28 2022-08-10 21:04:24 2022-08-10 21:45:27 0:41:03 0:27:05 0:13:58 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
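
As with the operator wait above, this is a 90-try, 10-second polling loop that timed out before the expected number of OSDs appeared. A manual equivalent (a sketch, assuming Rook's default namespace and app=rook-ceph-osd label) is to count the OSD pods directly:

    kubectl -n rook-ceph get pods -l app=rook-ceph-osd --no-headers | wc -l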