User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-05-05 14:22:36 | 2022-05-05 15:31:10 | 2022-05-05 22:32:34 | 7:01:24 | rados | pacific | smithi | 73636a1 | 16 | 11 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6823234 | 2022-05-05 14:24:24 | 2022-05-05 15:31:10 | 2022-05-05 15:42:22 | 0:11:12 | 0:04:09 | 0:07:03 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi080.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6823235 | 2022-05-05 14:24:25 | 2022-05-05 15:31:10 | 2022-05-05 15:42:15 | 0:11:05 | 0:02:22 | 0:08:43 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi148 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 6823236 | 2022-05-05 14:24:26 | 2022-05-05 15:32:41 | 2022-05-05 16:09:05 | 0:36:24 | 0:28:30 | 0:07:54 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6823237 | 2022-05-05 14:24:27 | 2022-05-05 15:34:01 | 2022-05-05 16:00:22 | 0:26:21 | 0:19:01 | 0:07:20 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi045 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 430a63dc-cc8a-11ec-8c39-001a4aab830c -e sha1=73636a1b00037ff974bcdc969b009c5ecec626cc -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6823238 | 2022-05-05 14:24:29 | 2022-05-05 15:35:22 | 2022-05-05 16:23:30 | 0:48:08 | 0:41:18 | 0:06:50 | smithi | master | rhel | 8.4 | rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 6823239 | 2022-05-05 14:24:30 | 2022-05-05 15:36:22 | 2022-05-05 15:55:36 | 0:19:14 | 0:12:48 | 0:06:26 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6823240 | 2022-05-05 14:24:32 | 2022-05-05 15:36:33 | 2022-05-05 16:12:38 | 0:36:05 | 0:28:51 | 0:07:14 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6823241 | 2022-05-05 14:24:34 | 2022-05-05 15:36:53 | 2022-05-05 16:12:28 | 0:35:35 | 0:27:54 | 0:07:41 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6823243 | 2022-05-05 14:24:35 | 2022-05-05 15:37:04 | 2022-05-05 16:01:52 | 0:24:48 | 0:19:04 | 0:05:44 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
fail | 6823245 | 2022-05-05 14:24:36 | 2022-05-05 15:37:15 | 2022-05-05 15:58:30 | 0:21:15 | 0:12:06 | 0:09:09 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi146 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:73636a1b00037ff974bcdc969b009c5ecec626cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 961c405e-cc8a-11ec-8c39-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
fail | 6823247 | 2022-05-05 14:24:37 | 2022-05-05 15:39:26 | 2022-05-05 15:57:46 | 0:18:20 | 0:13:00 | 0:05:20 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi007.front.sepia.ceph.com: ['type=AVC msg=audit(1651766137.551:6414): avc: denied { ioctl } for pid=56128 comm="iptables" path="/var/lib/containers/storage/overlay/8695b22a850c325629c7dee1f01be891ecbda26e9251d5c8eab77f374e855844/merged" dev="overlay" ino=3412425 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6823249 | 2022-05-05 14:24:38 | 2022-05-05 15:39:36 | 2022-05-05 16:15:19 | 0:35:43 | 0:28:39 | 0:07:04 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6823251 | 2022-05-05 14:24:39 | 2022-05-05 15:41:37 | 2022-05-05 16:09:25 | 0:27:48 | 0:19:27 | 0:08:21 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi122 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 3b8cab1e-cc8b-11ec-8c39-001a4aab830c -e sha1=73636a1b00037ff974bcdc969b009c5ecec626cc -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6823253 | 2022-05-05 14:24:40 | 2022-05-05 15:42:38 | 2022-05-05 16:17:55 | 0:35:17 | 0:29:19 | 0:05:58 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6823255 | 2022-05-05 14:24:41 | 2022-05-05 15:42:59 | 2022-05-05 16:17:04 | 0:34:05 | 0:27:24 | 0:06:41 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6823257 | 2022-05-05 14:24:43 | 2022-05-05 15:43:49 | 2022-05-05 16:16:43 | 0:32:54 | 0:24:56 | 0:07:58 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6823259 | 2022-05-05 14:24:44 | 2022-05-05 15:46:00 | 2022-05-05 15:58:00 | 0:12:00 | 0:04:49 | 0:07:11 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi079.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6823261 | 2022-05-05 14:24:45 | 2022-05-05 15:46:41 | 2022-05-05 16:06:11 | 0:19:30 | 0:12:59 | 0:06:31 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
pass | 6823263 | 2022-05-05 14:24:46 | 2022-05-05 15:49:12 | 2022-05-05 16:24:46 | 0:35:34 | 0:27:35 | 0:07:59 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6823265 | 2022-05-05 14:24:47 | 2022-05-05 15:49:53 | 2022-05-05 16:47:46 | 0:57:53 | 0:50:29 | 0:07:24 | smithi | master | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 6823267 | 2022-05-05 14:24:48 | 2022-05-05 15:50:43 | 2022-05-05 16:29:04 | 0:38:21 | 0:31:33 | 0:06:48 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
pass | 6823269 | 2022-05-05 14:24:50 | 2022-05-05 15:51:25 | 2022-05-05 16:29:24 | 0:37:59 | 0:27:33 | 0:10:26 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
dead | 6823270 | 2022-05-05 14:24:51 | 2022-05-05 15:53:35 | 2022-05-05 22:32:34 | 6:38:59 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: hit max job timeout
fail | 6823271 | 2022-05-05 14:24:52 | 2022-05-05 15:53:46 | 2022-05-05 16:14:36 | 0:20:50 | 0:13:25 | 0:07:25 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi100.front.sepia.ceph.com: ['type=AVC msg=audit(1651767052.822:6417): avc: denied { ioctl } for pid=56107 comm="iptables" path="/var/lib/containers/storage/overlay/3c49bcd713af93aa62b9d4385271cd6641e8640d7acede225ef876e15bb3f967/merged" dev="overlay" ino=3412407 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6823272 | 2022-05-05 14:24:53 | 2022-05-05 15:53:46 | 2022-05-05 16:29:13 | 0:35:27 | 0:29:12 | 0:06:15 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6823273 | 2022-05-05 14:24:54 | 2022-05-05 15:54:36 | 2022-05-05 16:27:51 | 0:33:15 | 0:25:09 | 0:08:06 | smithi | master | ubuntu | 20.04 | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 6823274 | 2022-05-05 14:24:55 | 2022-05-05 15:56:27 | 2022-05-05 16:23:57 | 0:27:30 | 0:19:42 | 0:07:48 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4b454e92-cc8d-11ec-8c39-001a4aab830c -e sha1=73636a1b00037ff974bcdc969b009c5ecec626cc -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6823275 | 2022-05-05 14:24:56 | 2022-05-05 15:57:27 | 2022-05-05 16:29:23 | 0:31:56 | 0:25:12 | 0:06:44 | smithi | master | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds