User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-08-15 13:44:27 | 2022-08-15 15:03:45 | 2022-08-15 17:16:07 | 2:12:22 | rados | wip-yuri3-testing-2022-08-11-0809-pacific | smithi | eb4319a | 12 | 14 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6973469 | 2022-08-15 13:46:18 | 2022-08-15 15:03:44 | 2022-08-15 15:29:29 | 0:25:45 | 0:19:04 | 0:06:41 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
fail | 6973470 | 2022-08-15 13:46:19 | 2022-08-15 15:03:44 | 2022-08-15 15:17:24 | 0:13:40 | 0:06:36 | 0:07:04 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi152.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6973471 | 2022-08-15 13:46:20 | 2022-08-15 15:03:45 | 2022-08-15 15:29:04 | 0:25:19 | 0:19:05 | 0:06:14 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb4319a2b19ca3fba01742173e97dd5b50b2f291 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6973472 | 2022-08-15 13:46:21 | 2022-08-15 15:03:45 | 2022-08-15 15:29:09 | 0:25:24 | 0:17:53 | 0:07:31 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd1a459e-1cac-11ed-8431-001a4aab830c -e sha1=eb4319a2b19ca3fba01742173e97dd5b50b2f291 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6973473 | 2022-08-15 13:46:22 | 2022-08-15 15:04:16 | 2022-08-15 15:49:37 | 0:45:21 | 0:32:51 | 0:12:30 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 6973474 | 2022-08-15 13:46:23 | 2022-08-15 15:10:07 | 2022-08-15 15:32:06 | 0:21:59 | 0:13:16 | 0:08:43 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi084.front.sepia.ceph.com: ['type=AVC msg=audit(1660577300.482:6435): avc: denied { ioctl } for pid=59621 comm="iptables" path="/var/lib/containers/storage/overlay/6733ee33f83126cd6b041877613147ccff5fdf27950775a6ff56af5cc083704d/merged" dev="overlay" ino=3805322 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6973475 | 2022-08-15 13:46:25 | 2022-08-15 15:11:17 | 2022-08-15 16:11:04 | 0:59:47 | 0:53:08 | 0:06:39 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6973476 | 2022-08-15 13:46:26 | 2022-08-15 15:11:58 | 2022-08-15 16:50:09 | 1:38:11 | 1:24:55 | 0:13:16 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
pass | 6973477 | 2022-08-15 13:46:27 | 2022-08-15 15:18:39 | 2022-08-15 15:52:18 | 0:33:39 | 0:27:11 | 0:06:28 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6973478 | 2022-08-15 13:46:28 | 2022-08-15 15:19:09 | 2022-08-15 15:43:52 | 0:24:43 | 0:19:20 | 0:05:23 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
pass | 6973479 | 2022-08-15 13:46:30 | 2022-08-15 15:19:10 | 2022-08-15 16:12:30 | 0:53:20 | 0:46:45 | 0:06:35 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 6973480 | 2022-08-15 13:46:31 | 2022-08-15 15:19:20 | 2022-08-15 15:41:57 | 0:22:37 | 0:14:18 | 0:08:19 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
fail | 6973481 | 2022-08-15 13:46:32 | 2022-08-15 15:20:30 | 2022-08-15 15:40:06 | 0:19:36 | 0:13:14 | 0:06:22 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi145 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:eb4319a2b19ca3fba01742173e97dd5b50b2f291 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid edf0bade-1cae-11ed-8431-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
dead | 6973482 | 2022-08-15 13:46:33 | 2022-08-15 15:21:01 | 2022-08-15 15:41:19 | 0:20:18 | | | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
fail | 6973483 | 2022-08-15 13:46:34 | 2022-08-15 15:21:31 | 2022-08-15 15:48:02 | 0:26:31 | 0:18:45 | 0:07:46 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi121 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f214804-1caf-11ed-8431-001a4aab830c -e sha1=eb4319a2b19ca3fba01742173e97dd5b50b2f291 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6973484 | 2022-08-15 13:46:35 | 2022-08-15 15:23:12 | 2022-08-15 17:16:07 | 1:52:55 | 1:44:09 | 0:08:46 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} | 3 | |
pass | 6973485 | 2022-08-15 13:46:37 | 2022-08-15 15:24:02 | 2022-08-15 15:50:50 | 0:26:48 | 0:19:25 | 0:07:23 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
fail | 6973487 | 2022-08-15 13:46:38 | 2022-08-15 15:25:34 | 2022-08-15 15:51:24 | 0:25:50 | 0:18:51 | 0:06:59 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb4319a2b19ca3fba01742173e97dd5b50b2f291 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 6973489 | 2022-08-15 13:46:39 | 2022-08-15 15:29:05 | 2022-08-15 15:54:11 | 0:25:06 | 0:17:46 | 0:07:20 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi137 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 39c4f302-1cb0-11ed-8431-001a4aab830c -e sha1=eb4319a2b19ca3fba01742173e97dd5b50b2f291 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6973491 | 2022-08-15 13:46:40 | 2022-08-15 15:29:36 | 2022-08-15 15:45:35 | 0:15:59 | 0:05:16 | 0:10:43 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi029.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6973493 | 2022-08-15 13:46:41 | 2022-08-15 15:33:47 | 2022-08-15 15:57:34 | 0:23:47 | 0:14:14 | 0:09:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi119.front.sepia.ceph.com: ['type=AVC msg=audit(1660578880.371:6431): avc: denied { ioctl } for pid=59630 comm="iptables" path="/var/lib/containers/storage/overlay/005f3936bf8c7a631b0ea12190458092e720f9a325b8288fd0b3c9c2d64e6d3f/merged" dev="overlay" ino=3805300 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6973495 | 2022-08-15 13:46:43 | 2022-08-15 15:37:58 | 2022-08-15 16:15:16 | 0:37:18 | 0:28:00 | 0:09:18 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6973497 | 2022-08-15 13:46:44 | 2022-08-15 15:39:49 | 2022-08-15 15:56:43 | 0:16:54 | 0:10:19 | 0:06:35 | smithi | main | rhel | 8.4 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6973499 | 2022-08-15 13:46:45 | 2022-08-15 15:40:10 | 2022-08-15 16:44:07 | 1:03:57 | 0:50:07 | 0:13:50 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 6973501 | 2022-08-15 13:46:46 | 2022-08-15 15:45:41 | 2022-08-15 16:06:32 | 0:20:51 | 0:14:20 | 0:06:31 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi029.front.sepia.ceph.com: ['type=AVC msg=audit(1660579418.510:6432): avc: denied { ioctl } for pid=59757 comm="iptables" path="/var/lib/containers/storage/overlay/974c1624c69a05f6bb5a76d08b7c244f6b053bbd123e79b501787ed63e54235e/merged" dev="overlay" ino=3805335 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1660579418.274:6427): avc: denied { ioctl } for pid=59719 comm="iptables" path="/var/lib/containers/storage/overlay/974c1624c69a05f6bb5a76d08b7c244f6b053bbd123e79b501787ed63e54235e/merged" dev="overlay" ino=3805335 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 6973503 | 2022-08-15 13:46:47 | 2022-08-15 15:48:12 | 2022-08-15 16:14:55 | 0:26:43 | 0:18:48 | 0:07:55 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi187 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1f86acbc-1cb3-11ed-8431-001a4aab830c -e sha1=eb4319a2b19ca3fba01742173e97dd5b50b2f291 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6973505 | 2022-08-15 13:46:49 | 2022-08-15 15:49:43 | 2022-08-15 16:27:21 | 0:37:38 | 0:25:13 | 0:12:25 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
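For readability, here are the two most heavily quoted commands from the failure reasons above, with their nested shell escaping unwrapped. This is a sketch re-rendering what the job rows already show, not an independent reproduction; both commands run inside a `cephadm shell` on the target host, as the original failure lines indicate. The first is the version-convergence check shared by the four mgr-nfs-upgrade failures (6973472, 6973483, 6973489, 6973503); the second is the disk-replacement loop from the rm-zap-add failure (6973481).

```bash
# Version-convergence check (mgr-nfs-upgrade jobs): after the upgrade,
# every daemon must report the same Ceph version, i.e. the "overall"
# map in `ceph versions` must contain exactly one entry. `jq -e` exits
# nonzero when the expression is false, which fails the job.
ceph versions | jq -e '.overall | length == 1'

# Disk-replacement loop (job 6973481, exited with status 22): remove
# osd.1, wait for the removal to drain, zap its device, re-add it as a
# new OSD, then wait for it to come back up.
set -e
set -x
ceph orch ps
ceph orch device ls
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```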