Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7040287 2022-09-21 13:09:17 2022-09-21 13:12:39 2022-09-21 13:31:46 0:19:07 0:06:33 0:12:34 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi178.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 7040288 2022-09-21 13:09:18 2022-09-21 13:16:10 2022-09-21 13:39:06 0:22:56 0:16:50 0:06:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7040289 2022-09-21 13:09:19 2022-09-21 13:16:10 2022-09-21 13:52:14 0:36:04 0:23:53 0:12:11 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7040290 2022-09-21 13:09:20 2022-09-21 13:18:11 2022-09-21 13:50:09 0:31:58 0:22:03 0:09:55 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7040291 2022-09-21 13:09:21 2022-09-21 13:18:41 2022-09-21 14:13:09 0:54:28 0:36:11 0:18:17 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7040292 2022-09-21 13:09:23 2022-09-21 13:22:58 2022-09-21 14:01:52 0:38:54 0:23:33 0:15:21 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7040293 2022-09-21 13:09:24 2022-09-21 13:27:39 2022-09-21 13:42:30 0:14:51 0:06:13 0:08:38 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi036 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7040294 2022-09-21 13:09:25 2022-09-21 13:27:39 2022-09-21 13:56:24 0:28:45 0:16:07 0:12:38 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi112 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:bbb8d26ae07a2e35198ffb4596c3edcf2d210f2f shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6c8427c0-39b3-11ed-8431-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7040295 2022-09-21 13:09:26 2022-09-21 13:27:43 2022-09-21 13:54:04 0:26:21 0:16:25 0:09:56 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7040296 2022-09-21 13:09:28 2022-09-21 13:31:00 2022-09-21 14:14:08 0:43:08 0:35:55 0:07:13 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 7040297 2022-09-21 13:09:29 2022-09-21 13:31:03 2022-09-21 15:36:45 2:05:42 1:35:04 0:30:38 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
fail 7040298 2022-09-21 13:09:30 2022-09-21 13:31:11 2022-09-21 13:53:57 0:22:46 0:07:35 0:15:11 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi106.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 7040299 2022-09-21 13:09:31 2022-09-21 13:36:32 2022-09-21 13:59:41 0:23:09 0:16:01 0:07:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi178.front.sepia.ceph.com: ['type=AVC msg=audit(1663768590.062:18506): avc: denied { ioctl } for pid=121625 comm="iptables" path="/var/lib/containers/storage/overlay/31dab70e5ad2e52ee5a9a20be332285f447ddb72888a59db5fed0249bed7d0b7/merged" dev="overlay" ino=3803257 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7040300 2022-09-21 13:09:32 2022-09-21 13:36:32 2022-09-21 14:09:17 0:32:45 0:22:00 0:10:45 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7040301 2022-09-21 13:09:34 2022-09-21 13:37:53 2022-09-21 14:13:01 0:35:08 0:23:57 0:11:11 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7040302 2022-09-21 13:09:35 2022-09-21 13:39:43 2022-09-21 14:54:30 1:14:47 0:56:55 0:17:52 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
fail 7040303 2022-09-21 13:09:36 2022-09-21 13:49:05 2022-09-21 14:12:08 0:23:03 0:16:20 0:06:43 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi134.front.sepia.ceph.com: ['type=AVC msg=audit(1663769352.179:18462): avc: denied { ioctl } for pid=121750 comm="iptables" path="/var/lib/containers/storage/overlay/741c0ad30ccaf1e13d76d9a99554c9453265b52a5e655ad28137f0108df52239/merged" dev="overlay" ino=3803257 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7040304 2022-09-21 13:09:37 2022-09-21 13:49:05 2022-09-21 14:08:03 0:18:58 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Cannot connect to remote host smithi187