User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
lflores | 2022-08-02 15:54:18 | 2022-08-02 15:56:33 | 2022-08-02 16:45:43 | 0:49:10 | rados | wip-yuri8-testing-2022-08-01-1413-pacific | smithi | 7061d0e | 4 | 11 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6955511 | 2022-08-02 15:55:54 | 2022-08-02 15:56:32 | 2022-08-02 16:17:53 | 0:21:21 | 0:15:14 | 0:06:07 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 6955512 | 2022-08-02 15:55:55 | 2022-08-02 15:56:33 | 2022-08-02 16:10:14 | 0:13:41 | 0:06:31 | 0:07:10 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi139.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6955513 | 2022-08-02 15:55:56 | 2022-08-02 15:56:33 | 2022-08-02 16:08:02 | 0:11:29 | 0:04:51 | 0:06:38 | smithi | main | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi132 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6955514 | 2022-08-02 15:55:57 | 2022-08-02 15:56:33 | 2022-08-02 16:25:00 | 0:28:27 | 0:21:24 | 0:07:03 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi036 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4b5d095c-127d-11ed-8430-001a4aab830c -e sha1=7061d0e7ffd836dc9c76dfa4c41c6ba60edca507 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
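Several of the failed upgrade jobs in this run die on the same convergence check, `ceph versions | jq -e '.overall | length == 1'`. A minimal sketch of what that expression asserts, using a hypothetical, abridged `ceph versions`-style JSON document (the values below are illustrative, not taken from these runs):

```python
import json

# Hypothetical (abridged) output of `ceph versions` during a partial
# upgrade: daemons still run two different builds, so the cluster has
# not converged on a single version.
versions_json = """
{
  "mon": {"ceph version 16.2.4 pacific (stable)": 3},
  "osd": {"ceph version 16.2.10 pacific (stable)": 8},
  "overall": {
    "ceph version 16.2.4 pacific (stable)": 3,
    "ceph version 16.2.10 pacific (stable)": 8
  }
}
"""

versions = json.loads(versions_json)

# Equivalent of `jq -e '.overall | length == 1'`: jq's -e flag sets a
# nonzero exit status when the expression evaluates to false or null,
# which is what makes the teuthology shell command (and the job) fail.
converged = len(versions["overall"]) == 1
print(converged)  # False: two versions still present, so the check fails
```

In other words, the jobs fail because some daemons never restarted onto the target build within the test's timeout, leaving more than one entry under `overall`.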
pass | 6955515 | 2022-08-02 15:55:58 | 2022-08-02 15:56:33 | 2022-08-02 16:45:43 | 0:49:10 | 0:42:12 | 0:06:58 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
fail | 6955516 | 2022-08-02 15:55:59 | 2022-08-02 15:56:34 | 2022-08-02 16:23:26 | 0:26:52 | 0:21:29 | 0:05:23 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
pass | 6955517 | 2022-08-02 15:56:00 | 2022-08-02 15:56:34 | 2022-08-02 16:19:27 | 0:22:53 | 0:15:50 | 0:07:03 | smithi | main | centos | 8.stream | rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}} | 1 | |
fail | 6955518 | 2022-08-02 15:56:02 | 2022-08-02 15:56:34 | 2022-08-02 16:20:09 | 0:23:35 | 0:16:06 | 0:07:29 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi055 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:7061d0e7ffd836dc9c76dfa4c41c6ba60edca507 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2881f12c-127d-11ed-8430-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
pass | 6955519 | 2022-08-02 15:56:03 | 2022-08-02 15:56:35 | 2022-08-02 16:21:24 | 0:24:49 | 0:16:03 | 0:08:46 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
fail | 6955520 | 2022-08-02 15:56:04 | 2022-08-02 15:56:35 | 2022-08-02 16:26:31 | 0:29:56 | 0:22:03 | 0:07:53 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7bf2f072-127d-11ed-8430-001a4aab830c -e sha1=7061d0e7ffd836dc9c76dfa4c41c6ba60edca507 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6955521 | 2022-08-02 15:56:05 | 2022-08-02 15:56:35 | 2022-08-02 16:25:43 | 0:29:08 | 0:21:49 | 0:07:19 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62e09936-127d-11ed-8430-001a4aab830c -e sha1=7061d0e7ffd836dc9c76dfa4c41c6ba60edca507 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6955522 | 2022-08-02 15:56:06 | 2022-08-02 15:56:36 | 2022-08-02 16:13:37 | 0:17:01 | 0:07:37 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi049.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6955523 | 2022-08-02 15:56:07 | 2022-08-02 15:56:36 | 2022-08-02 16:19:13 | 0:22:37 | 0:16:23 | 0:06:14 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi047.front.sepia.ceph.com: ['type=AVC msg=audit(1659456988.057:18324): avc: denied { ioctl } for pid=120080 comm="iptables" path="/var/lib/containers/storage/overlay/97da1d9728d2ef59562b1c513cb8bffab451bdb6c9db2eafcc778d988b88220b/merged" dev="overlay" ino=3805367 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 6955524 | 2022-08-02 15:56:08 | 2022-08-02 15:56:36 | 2022-08-02 16:25:52 | 0:29:16 | 0:22:32 | 0:06:44 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi170 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 65490c8a-127d-11ed-8430-001a4aab830c -e sha1=7061d0e7ffd836dc9c76dfa4c41c6ba60edca507 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6955525 | 2022-08-02 15:56:09 | 2022-08-02 15:56:37 | 2022-08-02 16:32:05 | 0:35:28 | 0:28:13 | 0:07:15 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds