User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-06-16 21:28:50 | 2022-06-17 01:15:21 | 2022-06-17 02:30:29 | 1:15:08 | rados | wip-yuri3-testing-2022-06-15-0732-pacific | smithi | f6079ba | 12 | 16 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6883276 | 2022-06-16 21:31:40 | 2022-06-17 01:15:20 | 2022-06-17 01:38:59 | 0:23:39 | 0:16:46 | 0:06:53 | smithi | main | centos | 8.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{centos_8} tasks/prometheus} | 2 | |
pass | 6883277 | 2022-06-16 21:31:42 | 2022-06-17 01:15:21 | 2022-06-17 02:04:24 | 0:49:03 | 0:42:54 | 0:06:09 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6883278 | 2022-06-16 21:31:44 | 2022-06-17 01:15:22 | 2022-06-17 01:36:43 | 0:21:21 | 0:09:16 | 0:12:05 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi090.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6883279 | 2022-06-16 21:31:46 | 2022-06-17 01:16:32 | 2022-06-17 01:39:01 | 0:22:29 | 0:16:10 | 0:06:19 | smithi | main | centos | 8.stream | rados/rest/{mgr-restful supported-random-distro$/{centos_8}} | 1 | |
fail | 6883280 | 2022-06-16 21:31:48 | 2022-06-17 01:16:33 | 2022-06-17 01:46:10 | 0:29:37 | 0:22:19 | 0:07:18 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f6079ba81bc217b94b789d0e84a74d7c92ef6876 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6883281 | 2022-06-16 21:31:50 | 2022-06-17 01:16:54 | 2022-06-17 01:46:27 | 0:29:33 | 0:21:14 | 0:08:19 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3d9ec248-eddd-11ec-8427-001a4aab830c -e sha1=f6079ba81bc217b94b789d0e84a74d7c92ef6876 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6883282 | 2022-06-16 21:31:52 | 2022-06-17 01:18:55 | 2022-06-17 01:41:57 | 0:23:02 | 0:16:09 | 0:06:53 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi073.front.sepia.ceph.com: ['type=AVC msg=audit(1655429936.937:18175): avc: denied { ioctl } for pid=118213 comm="iptables" path="/var/lib/containers/storage/overlay/ec9392cee485eaf6044468576731523002b5c451cfa32ebce52c27757b56aa18/merged" dev="overlay" ino=3803093 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 6883283 | 2022-06-16 21:31:54 | 2022-06-17 01:18:55 | 2022-06-17 01:50:11 | 0:31:16 | 0:21:54 | 0:09:22 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
fail | 6883284 | 2022-06-16 21:31:56 | 2022-06-17 01:18:56 | 2022-06-17 01:44:21 | 0:25:25 | 0:16:17 | 0:09:08 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f6079ba81bc217b94b789d0e84a74d7c92ef6876 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5a8e8f46-eddd-11ec-8427-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
fail | 6883285 | 2022-06-16 21:31:59 | 2022-06-17 01:19:07 | 2022-06-17 01:44:11 | 0:25:04 | 0:16:03 | 0:09:01 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi204.front.sepia.ceph.com: ['type=AVC msg=audit(1655430061.251:18183): avc: denied { ioctl } for pid=118156 comm="iptables" path="/var/lib/containers/storage/overlay/07d5e21438e66a7832b534aba08f1ed4e0a5cd4f9452c5395377df1197062bd1/merged" dev="overlay" ino=3803128 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6883286 | 2022-06-16 21:32:01 | 2022-06-17 01:20:48 | 2022-06-17 02:00:04 | 0:39:16 | 0:32:27 | 0:06:49 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6883287 | 2022-06-16 21:32:03 | 2022-06-17 01:20:49 | 2022-06-17 01:50:57 | 0:30:08 | 0:22:53 | 0:07:15 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a1514cf2-eddd-11ec-8427-001a4aab830c -e sha1=f6079ba81bc217b94b789d0e84a74d7c92ef6876 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6883288 | 2022-06-16 21:32:05 | 2022-06-17 01:20:50 | 2022-06-17 02:04:40 | 0:43:50 | 0:33:40 | 0:10:10 | smithi | main | ubuntu | 20.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/module_selftest} | 2 | |
fail | 6883289 | 2022-06-16 21:32:07 | 2022-06-17 01:21:11 | 2022-06-17 01:39:28 | 0:18:17 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
Failure Reason: Cannot connect to remote host smithi097
fail | 6883290 | 2022-06-16 21:32:09 | 2022-06-17 01:21:32 | 2022-06-17 01:50:18 | 0:28:46 | 0:22:25 | 0:06:21 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f6079ba81bc217b94b789d0e84a74d7c92ef6876 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6883291 | 2022-06-16 21:32:12 | 2022-06-17 01:21:33 | 2022-06-17 01:49:14 | 0:27:41 | 0:21:14 | 0:06:27 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a4f5f0a6-eddd-11ec-8427-001a4aab830c -e sha1=f6079ba81bc217b94b789d0e84a74d7c92ef6876 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6883292 | 2022-06-16 21:32:14 | 2022-06-17 01:22:15 | 2022-06-17 01:41:22 | 0:19:07 | 0:08:32 | 0:10:35 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi110.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6883293 | 2022-06-16 21:32:15 | 2022-06-17 01:23:36 | 2022-06-17 01:46:46 | 0:23:10 | 0:16:19 | 0:06:51 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi092.front.sepia.ceph.com: ['type=AVC msg=audit(1655430228.760:18131): avc: denied { ioctl } for pid=118142 comm="iptables" path="/var/lib/containers/storage/overlay/f7c8112a153d42c92cd9977bb04ccc7b537d728c27584eaf3603bb1b7b4dc294/merged" dev="overlay" ino=3803128 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6883294 | 2022-06-16 21:32:18 | 2022-06-17 01:23:36 | 2022-06-17 02:01:40 | 0:38:04 | 0:31:04 | 0:07:00 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/redirect_promote_tests} | 2 | |
pass | 6883295 | 2022-06-16 21:32:20 | 2022-06-17 01:23:47 | 2022-06-17 01:51:19 | 0:27:32 | 0:17:05 | 0:10:27 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 6883296 | 2022-06-16 21:32:22 | 2022-06-17 01:25:08 | 2022-06-17 02:30:29 | 1:05:21 | 0:57:44 | 0:07:37 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 6883297 | 2022-06-16 21:32:24 | 2022-06-17 01:25:28 | 2022-06-17 01:52:54 | 0:27:26 | 0:17:53 | 0:09:33 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 6883298 | 2022-06-16 21:32:26 | 2022-06-17 01:25:29 | 2022-06-17 01:49:23 | 0:23:54 | 0:16:37 | 0:07:17 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
pass | 6883299 | 2022-06-16 21:32:28 | 2022-06-17 01:25:49 | 2022-06-17 02:05:30 | 0:39:41 | 0:32:26 | 0:07:15 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6883300 | 2022-06-16 21:32:30 | 2022-06-17 01:26:20 | 2022-06-17 01:45:34 | 0:19:14 | | | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Cannot connect to remote host smithi066
fail | 6883301 | 2022-06-16 21:32:32 | 2022-06-17 01:26:30 | 2022-06-17 01:56:57 | 0:30:27 | 0:22:31 | 0:07:56 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi114 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7ad381de-edde-11ec-8427-001a4aab830c -e sha1=f6079ba81bc217b94b789d0e84a74d7c92ef6876 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6883302 | 2022-06-16 21:32:35 | 2022-06-17 01:27:41 | 2022-06-17 02:07:19 | 0:39:38 | 0:32:33 | 0:07:05 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
fail | 6883303 | 2022-06-16 21:32:37 | 2022-06-17 01:28:22 | 2022-06-17 02:09:00 | 0:40:38 | 0:29:07 | 0:11:31 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
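
Four of the mgr-nfs-upgrade failures above (jobs 6883281, 6883287, 6883291, 6883301) die on the same post-upgrade convergence check, `ceph versions | jq -e '.overall | length == 1'`, which asserts that every daemon in the cluster reports one and the same Ceph version. A minimal Python sketch of that predicate, using mocked `ceph versions` output (the version strings are abbreviated placeholders, not captured from a live cluster):

```python
import json

# Mocked `ceph versions` output for a cluster mid-upgrade: two distinct
# version strings are still counted under "overall".
mixed = json.loads(
    '{"overall": {"ceph version 15.2.x octopus (stable)": 3,'
    ' "ceph version 16.2.x pacific (stable)": 5}}'
)

# Mocked output for a fully converged cluster: a single version string.
converged = json.loads(
    '{"overall": {"ceph version 16.2.x pacific (stable)": 8}}'
)

def upgrade_complete(versions: dict) -> bool:
    """Equivalent of jq -e '.overall | length == 1': exactly one
    distinct version across all daemons."""
    return len(versions["overall"]) == 1

print(upgrade_complete(mixed))      # → False: daemons on two versions
print(upgrade_complete(converged))  # → True: upgrade finished
```

Since `jq -e` makes the exit status track the boolean result, the wrapping `bash -c` returns non-zero while daemons are still mixed, which is exactly how these jobs surface the failure ("Command failed ... with status 1").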