Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6882177 2022-06-16 16:16:22 2022-06-16 16:31:37 2022-06-16 17:08:46 0:37:09 0:29:00 0:08:09 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 6882178 2022-06-16 16:16:32 2022-06-16 16:33:27 2022-06-16 17:07:36 0:34:09 0:24:15 0:09:54 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 6882179 2022-06-16 16:16:41 2022-06-16 16:34:18 2022-06-16 16:51:43 0:17:25 0:08:40 0:08:45 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi130.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6882180 2022-06-16 16:16:51 2022-06-16 16:34:18 2022-06-16 17:39:59 1:05:41 0:55:56 0:09:45 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 6882181 2022-06-16 16:17:01 2022-06-16 16:34:39 2022-06-16 17:18:45 0:44:06 0:31:57 0:12:09 smithi main centos 8.stream rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{centos_8} tasks/dashboard} 2
Failure Reason:

Test failure: test_version (tasks.mgr.dashboard.test_api.VersionReqTest)

fail 6882182 2022-06-16 16:17:10 2022-06-16 16:35:29 2022-06-16 17:02:19 0:26:50 0:15:24 0:11:26 smithi main ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rest/test-restful.sh) on smithi202 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && cd -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=224fc22e07cebeecc3e08055cfd6105b1a30f173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="a" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.a CEPH_ROOT=/home/ubuntu/cephtest/clone.client.a CEPH_MNT=/home/ubuntu/cephtest/mnt.a adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.a/qa/workunits/rest/test-restful.sh'

fail 6882183 2022-06-16 16:17:19 2022-06-16 16:37:00 2022-06-16 17:08:08 0:31:08 0:23:37 0:07:31 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=224fc22e07cebeecc3e08055cfd6105b1a30f173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6882184 2022-06-16 16:17:28 2022-06-16 16:37:00 2022-06-16 17:08:53 0:31:53 0:23:14 0:08:39 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi158 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d96aee7a-ed94-11ec-8427-001a4aab830c -e sha1=224fc22e07cebeecc3e08055cfd6105b1a30f173 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
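
For readability, the shell-escaped check in the failure above (the same check appears in the later mgr-nfs-upgrade failures on smithi044, smithi089, and smithi033) unwraps to roughly:

    ceph versions | jq -e '.overall | length == 1'

jq -e exits non-zero when the expression is false, so a status-1 failure here suggests the cluster had not converged to a single Ceph version by the time the check ran.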

pass 6882185 2022-06-16 16:17:37 2022-06-16 16:39:01 2022-06-16 17:03:58 0:24:57 0:17:11 0:07:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 6882186 2022-06-16 16:17:49 2022-06-16 16:40:32 2022-06-16 17:43:21 1:02:49 0:53:40 0:09:09 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 6882187 2022-06-16 16:17:59 2022-06-16 16:41:33 2022-06-16 17:11:42 0:30:09 0:20:22 0:09:47 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 6882188 2022-06-16 16:18:08 2022-06-16 16:42:14 2022-06-16 17:18:07 0:35:53 0:26:09 0:09:44 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 6882189 2022-06-16 16:18:17 2022-06-16 16:42:44 2022-06-16 17:24:53 0:42:09 0:33:56 0:08:13 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 6882190 2022-06-16 16:18:27 2022-06-16 21:28:15 2022-06-16 22:17:32 0:49:17 0:38:12 0:11:05 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
fail 6882191 2022-06-16 16:18:37 2022-06-16 21:29:47 2022-06-16 22:01:32 0:31:45 0:22:41 0:09:04 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

pass 6882192 2022-06-16 16:18:45 2022-06-16 21:30:08 2022-06-16 22:42:44 1:12:36 1:02:13 0:10:23 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 6882193 2022-06-16 16:18:55 2022-06-16 21:31:09 2022-06-16 21:58:59 0:27:50 0:19:02 0:08:48 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
fail 6882194 2022-06-16 16:19:05 2022-06-16 21:31:20 2022-06-16 22:00:18 0:28:58 0:18:07 0:10:51 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi025 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:224fc22e07cebeecc3e08055cfd6105b1a30f173 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 03802256-edbe-11ec-8427-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
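
Unwrapping the nested shell quoting, the script that failed on smithi025 (exit status 22, presumably propagated from one of the ceph orch steps via set -e) is approximately the following; it removes osd.1, zaps its device, re-adds it, and waits for it to come back up:

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done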

pass 6882195 2022-06-16 16:19:16 2022-06-16 21:31:31 2022-06-16 22:20:49 0:49:18 0:37:45 0:11:33 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
fail 6882196 2022-06-16 16:19:25 2022-06-16 21:32:22 2022-06-16 22:03:41 0:31:19 0:23:29 0:07:50 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi044 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b94eae32-edbd-11ec-8427-001a4aab830c -e sha1=224fc22e07cebeecc3e08055cfd6105b1a30f173 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 6882197 2022-06-16 16:19:34 2022-06-16 21:32:44 2022-06-16 22:06:09 0:33:25 0:23:42 0:09:43 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
pass 6882198 2022-06-16 16:19:44 2022-06-16 21:32:44 2022-06-16 22:04:44 0:32:00 0:22:07 0:09:53 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 6882199 2022-06-16 16:19:52 2022-06-16 21:32:55 2022-06-17 00:51:21 3:18:26 3:06:50 0:11:36 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
fail 6882200 2022-06-16 16:20:00 2022-06-16 21:35:17 2022-06-16 22:06:35 0:31:18 0:24:06 0:07:12 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=224fc22e07cebeecc3e08055cfd6105b1a30f173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6882201 2022-06-16 16:20:10 2022-06-16 21:35:27 2022-06-16 22:05:12 0:29:45 0:23:03 0:06:42 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi089 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 434c445a-edbe-11ec-8427-001a4aab830c -e sha1=224fc22e07cebeecc3e08055cfd6105b1a30f173 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6882202 2022-06-16 16:20:19 2022-06-16 21:35:58 2022-06-16 21:58:01 0:22:03 0:10:31 0:11:32 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi002.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6882203 2022-06-16 16:20:28 2022-06-16 21:36:29 2022-06-16 22:02:06 0:25:37 0:17:22 0:08:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi203.front.sepia.ceph.com: ['type=AVC msg=audit(1655416700.190:18128): avc: denied { ioctl } for pid=118200 comm="iptables" path="/var/lib/containers/storage/overlay/178eb94c3500274d6338a9f1af6297d9b82ea39e954155cafc8ba11c0f5dc07a/merged" dev="overlay" ino=3803085 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1655416700.292:18131): avc: denied { ioctl } for pid=118222 comm="iptables" path="/var/lib/containers/storage/overlay/178eb94c3500274d6338a9f1af6297d9b82ea39e954155cafc8ba11c0f5dc07a/merged" dev="overlay" ino=3803085 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 6882204 2022-06-16 16:20:38 2022-06-16 21:36:49 2022-06-16 22:24:36 0:47:47 0:37:23 0:10:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/test_rbd_api} 3
pass 6882205 2022-06-16 16:20:48 2022-06-16 21:37:00 2022-06-16 22:07:45 0:30:45 0:19:02 0:11:43 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 6882206 2022-06-16 16:20:58 2022-06-16 21:38:21 2022-06-16 22:12:05 0:33:44 0:24:36 0:09:08 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
fail 6882207 2022-06-16 16:21:07 2022-06-16 21:38:31 2022-06-16 21:56:43 0:18:12 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Cannot connect to remote host smithi061

pass 6882208 2022-06-16 16:21:15 2022-06-16 21:38:43 2022-06-16 22:10:47 0:32:04 0:21:31 0:10:33 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 6882209 2022-06-16 16:21:26 2022-06-16 21:39:13 2022-06-16 22:26:56 0:47:43 0:38:04 0:09:39 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
fail 6882210 2022-06-16 16:21:35 2022-06-16 21:39:24 2022-06-16 22:10:55 0:31:31 0:24:07 0:07:24 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2f1a5cc-edbe-11ec-8427-001a4aab830c -e sha1=224fc22e07cebeecc3e08055cfd6105b1a30f173 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6882211 2022-06-16 16:21:45 2022-06-16 21:39:55 2022-06-16 22:19:24 0:39:29 0:31:00 0:08:29 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds