User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-04-30 17:01:51 | 2022-05-01 02:43:57 | 2022-05-01 09:45:44 | 7:01:47 | rados | wip-yuri4-testing-2022-04-29-1830-pacific | smithi | 5fb400b | 20 | 13 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6816216 | 2022-04-30 17:03:38 | 2022-05-01 02:43:57 | 2022-05-01 08:11:07 | 5:27:10 | 4:47:27 | 0:39:43 | smithi | master | ubuntu | 20.04 | rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 6816217 | 2022-04-30 17:03:39 | 2022-05-01 02:43:58 | 2022-05-01 03:04:28 | 0:20:30 | 0:10:52 | 0:09:38 | smithi | master | ubuntu | 20.04 | rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} | 1 | |
fail | 6816218 | 2022-04-30 17:03:40 | 2022-05-01 02:46:58 | 2022-05-01 03:00:19 | 0:13:21 | 0:06:25 | 0:06:56 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi124.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
pass | 6816219 | 2022-04-30 17:03:41 | 2022-05-01 02:46:58 | 2022-05-01 03:16:12 | 0:29:14 | 0:19:31 | 0:09:43 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
pass | 6816220 | 2022-04-30 17:03:42 | 2022-05-01 02:46:59 | 2022-05-01 03:23:47 | 0:36:48 | 0:28:32 | 0:08:16 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6816221 | 2022-04-30 17:03:43 | 2022-05-01 02:48:39 | 2022-05-01 03:15:46 | 0:27:07 | 0:20:11 | 0:06:56 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c53dfbc2-c8fa-11ec-8c39-001a4aab830c -e sha1=5fb400bd707676c39bd35235907ae7d994946974 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
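The escaped one-liner above is hard to read. Below is a hand-unwound sketch of the same check (reconstructed from the quoting, not copied from the teuthology source; the fsid and sha1 are this job's own values). `jq -e` exits non-zero unless `ceph versions` reports exactly one entry under `.overall`, so status 1 means the daemons had not yet converged on a single Ceph version after the upgrade. The same check fails the same way in jobs 6816231, 6816235, and 6816250 below.

```sh
# Hand-unescaped sketch of the failing check (reconstructed from the
# quoting above). It asserts that every daemon reports the same Ceph
# version once the upgrade has finished.
sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    --fsid c53dfbc2-c8fa-11ec-8c39-001a4aab830c \
    -e sha1=5fb400bd707676c39bd35235907ae7d994946974 -- \
    bash -c 'ceph versions | jq -e ".overall | length == 1"'
```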
fail | 6816222 | 2022-04-30 17:03:44 | 2022-05-01 02:48:40 | 2022-05-01 03:09:32 | 0:20:52 | 0:14:58 | 0:05:54 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi081.front.sepia.ceph.com: ['type=AVC msg=audit(1651374437.380:17983): avc: denied { ioctl } for pid=107041 comm="iptables" path="/var/lib/containers/storage/overlay/90653893b702f2d07cc835fc5798562bf62af15c85e164e397872030cdb8efae/merged" dev="overlay" ino=3412417 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
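Note the `permissive=1` field in the record: the domain was permissive, so the ioctl was allowed but still logged, and the harness fails the job on any AVC record it finds. One way to inspect such denials on the affected node (generic SELinux audit tooling, shown as a sketch; not part of the teuthology job itself):

```sh
# List recent AVC denials, then ask audit2allow to explain why each access
# was denied. Assumes auditd and policycoreutils-python-utils are installed,
# as they are on the CentOS 8.stream test nodes.
sudo ausearch -m AVC -ts recent
sudo ausearch -m AVC --raw | audit2allow -w
```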
fail | 6816223 | 2022-04-30 17:03:45 | 2022-05-01 02:48:50 | 2022-05-01 04:11:40 | 1:22:50 | 1:16:49 | 0:06:01 | smithi | master | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: "2022-05-01T04:00:16.110805+0000 mon.a (mon.0) 6577 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log
pass | 6816224 | 2022-04-30 17:03:46 | 2022-05-01 02:48:50 | 2022-05-01 03:22:33 | 0:33:43 | 0:27:11 | 0:06:32 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 6816225 | 2022-04-30 17:03:47 | 2022-05-01 02:49:41 | 2022-05-01 03:24:46 | 0:35:05 | 0:29:05 | 0:06:00 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6816226 | 2022-04-30 17:03:48 | 2022-05-01 02:49:41 | 2022-05-01 03:25:03 | 0:35:22 | 0:29:01 | 0:06:21 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6816227 | 2022-04-30 17:03:49 | 2022-05-01 02:50:01 | 2022-05-01 03:12:15 | 0:22:14 | 0:14:15 | 0:07:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi049 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:5fb400bd707676c39bd35235907ae7d994946974 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ddba0a60-c8fa-11ec-8c39-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
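Unwinding the quoting, the inline script this job feeds to `bash -c` inside `cephadm shell` reads as follows (a hand-reconstructed sketch of the same text; under `set -e`, status 22 is the exit code of whichever command failed first):

```sh
# rm-zap-add workflow, unescaped from the command above: remove osd.1,
# zap the device it lived on, re-add it, and wait for the OSD to come back up.
set -e
set -x
ceph orch ps
ceph orch device ls
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```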
pass | 6816228 | 2022-04-30 17:03:50 | 2022-05-01 02:50:02 | 2022-05-01 03:29:32 | 0:39:30 | 0:33:26 | 0:06:04 | smithi | master | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 6816229 | 2022-04-30 17:03:51 | 2022-05-01 02:50:02 | 2022-05-01 03:10:51 | 0:20:49 | 0:15:08 | 0:05:41 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi057.front.sepia.ceph.com: ['type=AVC msg=audit(1651374533.081:17987): avc: denied { ioctl } for pid=106962 comm="iptables" path="/var/lib/containers/storage/overlay/f69dbee2d0c2482dd45c186ccd36dfc28cf761278e4a73b53c5ca43f3afb55d7/merged" dev="overlay" ino=3410457 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6816230 | 2022-04-30 17:03:52 | 2022-05-01 02:50:02 | 2022-05-01 03:28:52 | 0:38:50 | 0:30:42 | 0:08:08 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6816231 | 2022-04-30 17:03:53 | 2022-05-01 02:51:33 | 2022-05-01 03:20:30 | 0:28:57 | 0:21:41 | 0:07:16 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29997434-c8fb-11ec-8c39-001a4aab830c -e sha1=5fb400bd707676c39bd35235907ae7d994946974 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 6816232 | 2022-04-30 17:03:54 | 2022-05-01 02:51:33 | 2022-05-01 03:30:12 | 0:38:39 | 0:25:39 | 0:13:00 | smithi | master | ubuntu | 20.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} | 2 | |
fail | 6816233 | 2022-04-30 17:03:55 | 2022-05-01 02:54:34 | 2022-05-01 03:06:28 | 0:11:54 | 0:04:20 | 0:07:34 | smithi | master | ubuntu | 20.04 | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} supported-random-distro$/{ubuntu_latest} tasks/dashboard} | 2 | |
Failure Reason: Command failed on smithi036 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 6816234 | 2022-04-30 17:03:56 | 2022-05-01 02:54:34 | 2022-05-01 03:33:13 | 0:38:39 | 0:29:39 | 0:09:00 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6816235 | 2022-04-30 17:03:57 | 2022-05-01 02:55:45 | 2022-05-01 03:24:16 | 0:28:31 | 0:20:23 | 0:08:08 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi196 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f07f8f16-c8fb-11ec-8c39-001a4aab830c -e sha1=5fb400bd707676c39bd35235907ae7d994946974 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 6816236 | 2022-04-30 17:03:58 | 2022-05-01 02:57:05 | 2022-05-01 03:11:16 | 0:14:11 | 0:07:00 | 0:07:11 | smithi | master | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: [Errno 2] Cannot find file on the remote 'ubuntu@smithi061.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'
fail | 6816237 | 2022-04-30 17:03:59 | 2022-05-01 02:57:46 | 2022-05-01 03:18:43 | 0:20:57 | 0:14:56 | 0:06:01 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi040.front.sepia.ceph.com: ['type=AVC msg=audit(1651374990.057:17985): avc: denied { ioctl } for pid=106786 comm="iptables" path="/var/lib/containers/storage/overlay/3857880d85a58c74db4c6985b9f541f9e32276922d9ee618bf6b879affc2b592/merged" dev="overlay" ino=3412423 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6816238 | 2022-04-30 17:04:00 | 2022-05-01 02:57:46 | 2022-05-01 03:32:03 | 0:34:17 | 0:26:54 | 0:07:23 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
dead | 6816239 | 2022-04-30 17:04:01 | 2022-05-01 02:58:47 | 2022-05-01 09:45:44 | 6:46:57 | | | smithi | master | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: hit max job timeout
pass | 6816240 | 2022-04-30 17:04:02 | 2022-05-01 03:01:57 | 2022-05-01 03:36:35 | 0:34:38 | 0:28:58 | 0:05:40 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6816241 | 2022-04-30 17:04:03 | 2022-05-01 03:01:58 | 2022-05-01 03:25:56 | 0:23:58 | 0:17:11 | 0:06:47 | smithi | master | rhel | 8.4 | rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
pass | 6816242 | 2022-04-30 17:04:04 | 2022-05-01 03:01:58 | 2022-05-01 03:37:12 | 0:35:14 | 0:28:20 | 0:06:54 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
pass | 6816243 | 2022-04-30 17:04:05 | 2022-05-01 03:02:08 | 2022-05-01 03:39:01 | 0:36:53 | 0:29:36 | 0:07:17 | smithi | master | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 6816244 | 2022-04-30 17:04:06 | 2022-05-01 03:02:09 | 2022-05-01 03:29:20 | 0:27:11 | 0:20:08 | 0:07:03 | smithi | master | rhel | 8.4 | rados/singleton/{all/mon-config mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 6816245 | 2022-04-30 17:04:07 | 2022-05-01 03:02:19 | 2022-05-01 03:36:00 | 0:33:41 | 0:27:21 | 0:06:20 | smithi | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
pass | 6816246 | 2022-04-30 17:04:08 | 2022-05-01 03:02:19 | 2022-05-01 03:38:26 | 0:36:07 | 0:28:39 | 0:07:28 | smithi | master | centos | 8.stream | rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/snaps-few-objects} | 2 | |
dead | 6816247 | 2022-04-30 17:04:09 | 2022-05-01 03:03:10 | 2022-05-01 09:41:24 | 6:38:14 | | | smithi | master | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: hit max job timeout
fail | 6816248 | 2022-04-30 17:04:10 | 2022-05-01 03:03:10 | 2022-05-01 03:25:45 | 0:22:35 | 0:15:14 | 0:07:21 | smithi | master | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi148.front.sepia.ceph.com: ['type=AVC msg=audit(1651375431.042:17986): avc: denied { ioctl } for pid=106989 comm="iptables" path="/var/lib/containers/storage/overlay/d2ce2c9f1eb2c6e0d01bc0df2ad693bcc894076e183c03d5526a41dfe03b170e/merged" dev="overlay" ino=3412418 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
pass | 6816249 | 2022-04-30 17:04:11 | 2022-05-01 03:04:31 | 2022-05-01 03:41:48 | 0:37:17 | 0:29:27 | 0:07:50 | smithi | master | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6816250 | 2022-04-30 17:04:12 | 2022-05-01 03:06:31 | 2022-05-01 03:34:57 | 0:28:26 | 0:20:52 | 0:07:34 | smithi | master | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: Command failed on smithi151 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6d9bc2fc-c8fd-11ec-8c39-001a4aab830c -e sha1=5fb400bd707676c39bd35235907ae7d994946974 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''