Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6884381 2022-06-17 14:00:28 2022-06-17 14:01:22 2022-06-17 14:50:32 0:49:10 0:41:51 0:07:19 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/sync workloads/rados_mon_osdmap_prune} 2
fail 6884382 2022-06-17 14:00:29 2022-06-17 14:01:22 2022-06-17 14:18:02 0:16:40 0:07:14 0:09:26 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi032.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

pass 6884383 2022-06-17 14:00:31 2022-06-17 14:01:23 2022-06-17 14:30:41 0:29:18 0:19:29 0:09:49 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 6884384 2022-06-17 14:00:32 2022-06-17 14:01:23 2022-06-17 14:17:20 0:15:57 0:05:28 0:10:29 smithi main ubuntu 20.04 rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Command failed on smithi026 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 6884385 2022-06-17 14:00:33 2022-06-17 14:02:14 2022-06-17 14:26:01 0:23:47 0:13:24 0:10:23 smithi main ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rest/test-restful.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && cd -- /home/ubuntu/cephtest/mnt.a/client.a/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=583dad03a4c486407143b2fa31042148953bda62 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="a" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.a CEPH_ROOT=/home/ubuntu/cephtest/clone.client.a CEPH_MNT=/home/ubuntu/cephtest/mnt.a adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.a/qa/workunits/rest/test-restful.sh'

pass 6884386 2022-06-17 14:00:35 2022-06-17 14:02:14 2022-06-17 17:05:00 3:02:46 2:51:22 0:11:24 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/classic thrashosds-health ubuntu_18.04} 4
fail 6884387 2022-06-17 14:00:36 2022-06-17 14:03:05 2022-06-17 14:34:36 0:31:31 0:22:11 0:09:20 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=583dad03a4c486407143b2fa31042148953bda62 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6884388 2022-06-17 14:00:38 2022-06-17 14:05:46 2022-06-17 14:46:08 0:40:22 0:27:33 0:12:49 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/rados_mon_workunits} 2
fail 6884389 2022-06-17 14:00:40 2022-06-17 14:07:17 2022-06-17 14:34:53 0:27:36 0:20:51 0:06:45 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 86140d56-ee48-11ec-8427-001a4aab830c -e sha1=583dad03a4c486407143b2fa31042148953bda62 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
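For readability, the check that fails here (and in the other mgr-nfs-upgrade jobs below) is the quoted command with its nested shell escaping removed; this is a simplified sketch of the intent, not the literal teuthology invocation (the real run uses /home/ubuntu/cephtest/cephadm plus --fsid and -e sha1=... arguments):

    # Run `ceph versions` inside the cephadm shell container and assert that
    # all daemons report a single overall version, i.e. the upgrade finished.
    # `jq -e` exits non-zero when the expression is false, failing the job.
    sudo cephadm --image docker.io/ceph/ceph:v16.2.4 shell \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      -- bash -c "ceph versions | jq -e '.overall | length == 1'"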

pass 6884390 2022-06-17 14:00:41 2022-06-17 14:07:47 2022-06-17 14:31:20 0:23:33 0:13:39 0:09:54 smithi main ubuntu 18.04 rados/cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_18.04} 2-node-mgr orchestrator_cli} 2
pass 6884391 2022-06-17 14:00:43 2022-06-17 14:07:48 2022-06-17 14:34:37 0:26:49 0:14:56 0:11:53 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
pass 6884392 2022-06-17 14:00:44 2022-06-17 14:09:19 2022-06-17 14:32:17 0:22:58 0:16:19 0:06:39 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 6884393 2022-06-17 14:00:45 2022-06-17 14:09:19 2022-06-17 14:41:47 0:32:28 0:21:25 0:11:03 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6884394 2022-06-17 14:00:47 2022-06-17 14:10:00 2022-06-17 16:24:28 2:14:28 2:07:51 0:06:37 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
Failure Reason:

reached maximum tries (800) after waiting for 4800 seconds

pass 6884395 2022-06-17 14:00:49 2022-06-17 14:10:01 2022-06-17 14:33:51 0:23:50 0:16:23 0:07:27 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_8} tasks/insights} 2
fail 6884396 2022-06-17 14:00:50 2022-06-17 14:10:31 2022-06-17 14:40:07 0:29:36 0:16:38 0:12:58 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi088 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:583dad03a4c486407143b2fa31042148953bda62 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 802d1422-ee49-11ec-8427-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
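The script embedded in this rm-zap-add failure, with the shell quoting unescaped for readability (same content as the log line above):

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # Find the device ID, host, and device path backing osd.1
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # Remove osd.1, wait for the removal to finish, zap the device, re-add it,
    # then wait for the replacement OSD to come back up
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done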

fail 6884397 2022-06-17 14:00:52 2022-06-17 14:11:52 2022-06-17 14:41:01 0:29:09 0:22:31 0:06:38 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ca30876-ee49-11ec-8427-001a4aab830c -e sha1=583dad03a4c486407143b2fa31042148953bda62 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6884398 2022-06-17 14:00:53 2022-06-17 14:11:53 2022-06-17 14:28:58 0:17:05 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
Failure Reason:

Cannot connect to remote host smithi041

pass 6884399 2022-06-17 14:00:55 2022-06-17 14:15:34 2022-06-17 16:33:38 2:18:04 2:10:36 0:07:28 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
fail 6884400 2022-06-17 14:00:57 2022-06-17 14:16:24 2022-06-17 14:45:41 0:29:17 0:21:34 0:07:43 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=583dad03a4c486407143b2fa31042148953bda62 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6884401 2022-06-17 14:00:58 2022-06-17 14:16:55 2022-06-17 14:46:08 0:29:13 0:21:51 0:07:22 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d4fde300-ee49-11ec-8427-001a4aab830c -e sha1=583dad03a4c486407143b2fa31042148953bda62 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6884402 2022-06-17 14:01:00 2022-06-17 14:17:25 2022-06-17 14:35:49 0:18:24 0:07:58 0:10:26 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi033.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6884403 2022-06-17 14:01:02 2022-06-17 14:18:36 2022-06-17 14:42:19 0:23:43 0:15:55 0:07:48 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi158.front.sepia.ceph.com: ['type=AVC msg=audit(1655476744.922:18128): avc: denied { ioctl } for pid=118324 comm="iptables" path="/var/lib/containers/storage/overlay/4823b22f30c11558be24dff7a896f66f89150b9e95e316118835225ad8c6693f/merged" dev="overlay" ino=3803116 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 6884404 2022-06-17 14:01:04 2022-06-17 14:18:36 2022-06-17 15:03:39 0:45:03 0:34:41 0:10:22 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
fail 6884405 2022-06-17 14:01:06 2022-06-17 14:20:17 2022-06-17 14:51:14 0:30:57 0:22:10 0:08:47 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8e5b3d5c-ee4a-11ec-8427-001a4aab830c -e sha1=583dad03a4c486407143b2fa31042148953bda62 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 6884406 2022-06-17 14:01:07 2022-06-17 14:22:18 2022-06-17 14:59:45 0:37:27 0:27:59 0:09:28 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds