User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-07-20 14:47:28 | 2022-07-21 07:16:43 | 2022-07-21 09:54:59 | 2:38:16 | rados | wip-yuri2-testing-2022-07-15-0755-pacific | smithi | af36a7f | 11 | 16 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6940660 | 2022-07-20 14:49:19 | 2022-07-21 07:16:42 | 2022-07-21 07:33:36 | 0:16:54 | 0:07:36 | 0:09:18 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b15d03d5395956e9279c4fe4db112ff61db696a0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 6940661 | 2022-07-20 14:49:20 | 2022-07-21 07:16:42 | 2022-07-21 07:57:54 | 0:41:12 | 0:32:24 | 0:08:48 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6940662 | 2022-07-20 14:49:22 | 2022-07-21 07:16:43 | 2022-07-21 07:29:46 | 0:13:03 | 0:06:34 | 0:06:29 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi183.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6940663 | 2022-07-20 14:49:23 | 2022-07-21 07:16:43 | 2022-07-21 07:55:33 | 0:38:50 | 0:31:15 | 0:07:35 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6940664 | 2022-07-20 14:49:24 | 2022-07-21 07:16:44 | 2022-07-21 07:29:16 | 0:12:32 | 0:04:59 | 0:07:33 | smithi | main | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi077 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 6940665 | 2022-07-20 14:49:25 | 2022-07-21 07:18:04 | 2022-07-21 07:48:59 | 0:30:55 | 0:22:39 | 0:08:16 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 59799916-08c7-11ed-842f-001a4aab830c -e sha1=af36a7f88905b5612e28e14aa231eff66672a7b3 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
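The same assertion also fails in jobs 6940672, 6940679, and 6940685 below, differing only in the base image and fsid. Stripped of the nested shell escaping, it is a single jq check that the upgrade converged to one Ceph version across all daemons; a minimal sketch of the unescaped command, using this job's image and fsid:

```bash
# Unescaped form of the failing check: open a cephadm shell against the
# bootstrapped cluster and assert that `ceph versions` reports exactly
# one overall version, i.e. the upgrade-with-workload step converged.
sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 \
  shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  --fsid 59799916-08c7-11ed-842f-001a4aab830c -- \
  bash -c 'ceph versions | jq -e ".overall | length == 1"'
# jq -e sets its exit status from the filter's result, so a cluster still
# running two or more versions under .overall exits non-zero, which
# surfaces as the "status 1" failure above.
```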
fail | 6940666 | 2022-07-20 14:49:27 | 2022-07-21 07:18:35 | 2022-07-21 07:41:21 | 0:22:46 | 0:14:41 | 0:08:05 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b15d03d5395956e9279c4fe4db112ff61db696a0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 6940667 | 2022-07-20 14:49:28 | 2022-07-21 07:18:35 | 2022-07-21 08:35:28 | 1:16:53 | 1:07:20 | 0:09:33 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
fail | 6940668 | 2022-07-20 14:49:29 | 2022-07-21 07:18:36 | 2022-07-21 07:35:56 | 0:17:20 | 0:07:47 | 0:09:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b15d03d5395956e9279c4fe4db112ff61db696a0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
fail | 6940669 | 2022-07-20 14:49:30 | 2022-07-21 07:18:56 | 2022-07-21 07:48:23 | 0:29:27 | 0:21:42 | 0:07:45 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
fail | 6940670 | 2022-07-20 14:49:31 | 2022-07-21 07:18:56 | 2022-07-21 07:44:21 | 0:25:25 | 0:16:34 | 0:08:51 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi141 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:af36a7f88905b5612e28e14aa231eff66672a7b3 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 25b2bf0e-08c7-11ed-842f-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
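The inline script in this failure is obscured by several layers of quoting; unwrapped, it is the rm-zap-add cycle the test drives (exit status 22 is typically EINVAL, i.e. one of the `ceph orch` calls rejected its arguments). A readable reconstruction:

```bash
# Readable form of the escaped inline script from job 6940670: remove
# osd.1, zap the device backing it, re-add it, and wait for it to come up.
set -e
set -x
ceph orch ps
ceph orch device ls
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```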
pass | 6940671 | 2022-07-20 14:49:33 | 2022-07-21 07:20:47 | 2022-07-21 07:47:30 | 0:26:43 | 0:17:50 | 0:08:53 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
fail | 6940672 | 2022-07-20 14:49:34 | 2022-07-21 07:20:48 | 2022-07-21 07:54:03 | 0:33:15 | 0:24:10 | 0:09:05 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f404abce-08c7-11ed-842f-001a4aab830c -e sha1=af36a7f88905b5612e28e14aa231eff66672a7b3 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6940673 | 2022-07-20 14:49:35 | 2022-07-21 07:21:18 | 2022-07-21 07:38:18 | 0:17:00 | 0:08:19 | 0:08:41 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi164 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b15d03d5395956e9279c4fe4db112ff61db696a0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 6940674 | 2022-07-20 14:49:36 | 2022-07-21 07:21:18 | 2022-07-21 08:01:10 | 0:39:52 | 0:28:47 | 0:11:05 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} | 2 | |
pass | 6940675 | 2022-07-20 14:49:38 | 2022-07-21 07:22:09 | 2022-07-21 08:11:22 | 0:49:13 | 0:39:56 | 0:09:17 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6940676 | 2022-07-20 14:49:39 | 2022-07-21 07:22:29 | 2022-07-21 08:05:40 | 0:43:11 | 0:34:26 | 0:08:45 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6940677 | 2022-07-20 14:49:40 | 2022-07-21 07:22:30 | 2022-07-21 08:20:35 | 0:58:05 | 0:49:59 | 0:08:06 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6940678 | 2022-07-20 14:49:41 | 2022-07-21 07:22:30 | 2022-07-21 09:54:59 | 2:32:29 | 2:22:26 | 0:10:03 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
fail | 6940679 | 2022-07-20 14:49:43 | 2022-07-21 07:23:40 | 2022-07-21 07:58:43 | 0:35:03 | 0:24:43 | 0:10:20 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi089 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 885caaa6-08c8-11ed-842f-001a4aab830c -e sha1=af36a7f88905b5612e28e14aa231eff66672a7b3 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6940680 | 2022-07-20 14:49:44 | 2022-07-21 07:23:41 | 2022-07-21 07:39:14 | 0:15:33 | 0:08:53 | 0:06:40 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi064.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6940681 | 2022-07-20 14:49:45 | 2022-07-21 07:24:01 | 2022-07-21 07:53:59 | 0:29:58 | 0:19:47 | 0:10:11 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi125.front.sepia.ceph.com: ['type=AVC msg=audit(1658389876.071:18063): avc: denied { ioctl } for pid=119751 comm="iptables" path="/var/lib/containers/storage/overlay/49fa22b8e50aa3c2215f62ccab460bf3a6cb803c4d66e6cd420ab2a99b74c5a5/merged" dev="overlay" ino=3936453 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
fail | 6940682 | 2022-07-20 14:49:46 | 2022-07-21 07:25:12 | 2022-07-21 07:44:11 | 0:18:59 | 0:10:14 | 0:08:45 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi154 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b15d03d5395956e9279c4fe4db112ff61db696a0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 6940683 | 2022-07-20 14:49:48 | 2022-07-21 07:25:12 | 2022-07-21 08:38:10 | 1:12:58 | 1:03:10 | 0:09:48 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6940684 | 2022-07-20 14:49:49 | 2022-07-21 07:25:13 | 2022-07-21 07:57:27 | 0:32:14 | 0:20:42 | 0:11:32 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
fail | 6940685 | 2022-07-20 14:49:50 | 2022-07-21 07:25:13 | 2022-07-21 08:00:11 | 0:34:58 | 0:25:46 | 0:09:12 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi035 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cb6abfa4-08c8-11ed-842f-001a4aab830c -e sha1=af36a7f88905b5612e28e14aa231eff66672a7b3 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6940686 | 2022-07-20 14:49:51 | 2022-07-21 07:25:13 | 2022-07-21 08:05:17 | 0:40:04 | 0:32:15 | 0:07:49 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds