Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6917423 2022-07-06 14:00:03 2022-07-06 14:18:31 2022-07-06 14:31:27 0:12:56 0:06:56 0:06:00 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi026.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6917424 2022-07-06 14:00:05 2022-07-06 14:18:32 2022-07-06 14:48:57 0:30:25 0:22:15 0:08:10 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7236970a-fd38-11ec-842d-001a4aab830c -e sha1=2103eaf02dd3c9da5290a33a5b7ec7f42138d76a -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
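The `jq -e '.overall | length == 1'` check in the command above (and in the other mgr-nfs-upgrade failures below) asserts that every daemon in the cluster reports the same Ceph version, i.e. that the upgrade has fully converged. A minimal sketch of that logic in Python, using a hand-written stand-in for `ceph versions` output (the version strings are illustrative, not taken from this run):

```python
import json

# Stand-in for `ceph versions` JSON during a half-finished upgrade:
# keys of the "overall" map are distinct version strings, values are
# daemon counts. Two keys means the cluster is still mixed-version.
mid_upgrade = json.loads(
    '{"overall": {"ceph version 16.2.4 pacific": 3,'
    ' "ceph version 17.0.0 quincy": 1}}'
)

def versions_converged(report: dict) -> bool:
    # Equivalent of `jq -e '.overall | length == 1'`: true only when a
    # single version string remains across all daemons.
    return len(report["overall"]) == 1

print(versions_converged(mid_upgrade))  # False: upgrade incomplete
print(versions_converged(
    {"overall": {"ceph version 17.0.0 quincy": 4}}))  # True: converged
```

When the filter evaluates to false, `jq -e` exits nonzero, which is what surfaces as "Command failed ... with status 1" in these jobs.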

pass 6917425 2022-07-06 14:00:06 2022-07-06 14:19:43 2022-07-06 14:43:00 0:23:17 0:16:52 0:06:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 6917426 2022-07-06 14:00:07 2022-07-06 14:19:53 2022-07-06 14:49:17 0:29:24 0:21:56 0:07:28 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6917427 2022-07-06 14:00:09 2022-07-06 14:19:54 2022-07-06 14:44:23 0:24:29 0:16:22 0:08:07 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi111 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:2103eaf02dd3c9da5290a33a5b7ec7f42138d76a shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5d516cca-fd38-11ec-842d-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

fail 6917428 2022-07-06 14:00:10 2022-07-06 14:20:24 2022-07-06 14:43:43 0:23:19 0:16:50 0:06:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi161.front.sepia.ceph.com: ['type=AVC msg=audit(1657118479.768:18190): avc: denied { ioctl } for pid=118878 comm="iptables" path="/var/lib/containers/storage/overlay/648912ad42e2cfd834fee0032745e066599534db0590c1977312dc970b304121/merged" dev="overlay" ino=3805301 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1657118479.843:18193): avc: denied { ioctl } for pid=118895 comm="iptables" path="/var/lib/containers/storage/overlay/648912ad42e2cfd834fee0032745e066599534db0590c1977312dc970b304121/merged" dev="overlay" ino=3805301 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 6917429 2022-07-06 14:00:11 2022-07-06 14:20:35 2022-07-06 14:49:35 0:29:00 0:22:44 0:06:16 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi131 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 89741ec4-fd38-11ec-842d-001a4aab830c -e sha1=2103eaf02dd3c9da5290a33a5b7ec7f42138d76a -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6917430 2022-07-06 14:00:13 2022-07-06 14:20:46 2022-07-06 14:32:31 0:11:45 0:05:21 0:06:24 smithi main ubuntu 20.04 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Command failed on smithi019 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 6917431 2022-07-06 14:00:14 2022-07-06 14:21:07 2022-07-06 14:56:47 0:35:40 0:28:28 0:07:12 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_api_tests} 2
fail 6917432 2022-07-06 14:00:16 2022-07-06 14:21:17 2022-07-06 14:51:13 0:29:56 0:22:09 0:07:47 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c1e5c460-fd38-11ec-842d-001a4aab830c -e sha1=2103eaf02dd3c9da5290a33a5b7ec7f42138d76a -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6917433 2022-07-06 14:00:17 2022-07-06 14:22:18 2022-07-06 14:36:42 0:14:24 0:07:33 0:06:51 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi074.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6917434 2022-07-06 14:00:20 2022-07-06 14:23:19 2022-07-06 14:47:13 0:23:54 0:16:27 0:07:27 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi104.front.sepia.ceph.com: ['type=AVC msg=audit(1657118668.270:18191): avc: denied { ioctl } for pid=119077 comm="iptables" path="/var/lib/containers/storage/overlay/37dabb09fbfa62fa5cb799965a7810c46bfd6f3f652270bbebfa5dd8f27b3303/merged" dev="overlay" ino=3805284 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 6917435 2022-07-06 14:00:21 2022-07-06 14:24:09 2022-07-06 14:48:44 0:24:35 0:16:46 0:07:49 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi066.front.sepia.ceph.com: ['type=AVC msg=audit(1657118781.193:18188): avc: denied { ioctl } for pid=118951 comm="iptables" path="/var/lib/containers/storage/overlay/ab6ea56de7be2d7ae6ea364d9e53014b18637f9f3e9dea1983b84178df9ae48e/merged" dev="overlay" ino=3805324 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 6917436 2022-07-06 14:00:23 2022-07-06 14:24:10 2022-07-06 14:52:55 0:28:45 0:23:07 0:05:38 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fbe5f6b2-fd38-11ec-842d-001a4aab830c -e sha1=2103eaf02dd3c9da5290a33a5b7ec7f42138d76a -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6917437 2022-07-06 14:00:24 2022-07-06 14:24:40 2022-07-06 15:00:42 0:36:02 0:28:25 0:07:37 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds