User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2022-06-22 01:17:46 | 2022-06-22 03:15:43 | 2022-06-22 04:57:04 | 1:41:21 | rados | wip-yuri4-testing-2022-06-21-0704-pacific | smithi | 6ba5460 | 10 | 17 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 6891167 | 2022-06-22 01:19:48 | 2022-06-22 03:15:12 | 2022-06-22 04:57:04 | 1:41:52 | 1:35:28 | 0:06:24 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} | 3 | |
pass | 6891168 | 2022-06-22 01:19:49 | 2022-06-22 03:15:22 | 2022-06-22 04:05:29 | 0:50:07 | 0:42:03 | 0:08:04 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 6891169 | 2022-06-22 01:19:50 | 2022-06-22 03:15:43 | 2022-06-22 03:26:59 | 0:11:16 | 0:04:40 | 0:06:36 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} | 1 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi142.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
pass | 6891170 | 2022-06-22 01:19:52 | 2022-06-22 03:15:43 | 2022-06-22 03:52:04 | 0:36:21 | 0:28:03 | 0:08:18 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6891171 | 2022-06-22 01:19:53 | 2022-06-22 03:17:04 | 2022-06-22 04:11:01 | 0:53:57 | 0:46:06 | 0:07:51 | smithi | main | ubuntu | 20.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 6891172 | 2022-06-22 01:19:55 | 2022-06-22 03:17:14 | 2022-06-22 04:34:32 | 1:17:18 | 1:10:32 | 0:06:46 | smithi | main | centos | 8.stream | rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_8} tasks/dashboard} | 2 | |
fail | 6891173 | 2022-06-22 01:19:56 | 2022-06-22 03:17:14 | 2022-06-22 03:44:28 | 0:27:14 | 0:18:32 | 0:08:42 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 6891174 | 2022-06-22 01:19:58 | 2022-06-22 03:18:55 | 2022-06-22 03:45:01 | 0:26:06 | 0:17:44 | 0:08:22 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi110 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6a884606-f1db-11ec-8428-001a4aab830c -e sha1=6ba5460a27104908466809fcb14c544d5dfc6522 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 6891175 | 2022-06-22 01:19:59 | 2022-06-22 03:19:55 | 2022-06-22 03:41:33 | 0:21:38 | 0:15:24 | 0:06:14 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh' |
pass | 6891176 | 2022-06-22 01:20:00 | 2022-06-22 03:20:26 | 2022-06-22 03:56:25 | 0:35:59 | 0:27:52 | 0:08:07 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6891177 | 2022-06-22 01:20:02 | 2022-06-22 03:21:16 | 2022-06-22 04:15:40 | 0:54:24 | 0:48:07 | 0:06:17 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 6891178 | 2022-06-22 01:20:03 | 2022-06-22 03:21:17 | 2022-06-22 03:50:07 | 0:28:50 | 0:19:41 | 0:09:09 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} | 1 | |
Failure Reason:
'wait for operator' reached maximum tries (90) after waiting for 900 seconds |
pass | 6891179 | 2022-06-22 01:20:05 | 2022-06-22 03:22:37 | 2022-06-22 03:42:25 | 0:19:48 | 0:12:17 | 0:07:31 | smithi | main | ubuntu | 20.04 | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} | 3 | |
fail | 6891180 | 2022-06-22 01:20:06 | 2022-06-22 03:22:58 | 2022-06-22 03:42:46 | 0:19:48 | 0:13:21 | 0:06:27 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason:
Command failed on smithi044 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6ba5460a27104908466809fcb14c544d5dfc6522 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d8bd549a-f1db-11ec-8428-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\'' |
pass | 6891181 | 2022-06-22 01:20:08 | 2022-06-22 03:23:18 | 2022-06-22 03:41:41 | 0:18:23 | 0:13:21 | 0:05:02 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
fail | 6891182 | 2022-06-22 01:20:09 | 2022-06-22 03:23:19 | 2022-06-22 03:48:31 | 0:25:12 | 0:18:58 | 0:06:14 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi107 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid efe81e3e-f1db-11ec-8428-001a4aab830c -e sha1=6ba5460a27104908466809fcb14c544d5dfc6522 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 6891183 | 2022-06-22 01:20:11 | 2022-06-22 03:23:19 | 2022-06-22 04:19:27 | 0:56:08 | 0:48:06 | 0:08:02 | smithi | main | centos | 8.stream | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 6891184 | 2022-06-22 01:20:12 | 2022-06-22 03:23:20 | 2022-06-22 03:58:35 | 0:35:15 | 0:28:42 | 0:06:33 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6891185 | 2022-06-22 01:20:14 | 2022-06-22 03:23:30 | 2022-06-22 03:48:49 | 0:25:19 | 0:18:49 | 0:06:30 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
fail | 6891186 | 2022-06-22 01:20:15 | 2022-06-22 03:23:30 | 2022-06-22 03:48:33 | 0:25:03 | 0:18:09 | 0:06:54 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f48ffdd0-f1db-11ec-8428-001a4aab830c -e sha1=6ba5460a27104908466809fcb14c544d5dfc6522 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
pass | 6891187 | 2022-06-22 01:20:17 | 2022-06-22 03:23:31 | 2022-06-22 04:00:45 | 0:37:14 | 0:30:27 | 0:06:47 | smithi | main | centos | 8.stream | rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
fail | 6891188 | 2022-06-22 01:20:18 | 2022-06-22 03:23:51 | 2022-06-22 03:37:33 | 0:13:42 | 0:05:42 | 0:08:00 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason:
[Errno 2] Cannot find file on the remote 'ubuntu@smithi073.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml' |
pass | 6891189 | 2022-06-22 01:20:20 | 2022-06-22 03:24:02 | 2022-06-22 03:59:18 | 0:35:16 | 0:28:28 | 0:06:48 | smithi | main | centos | 8.stream | rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 | |
fail | 6891190 | 2022-06-22 01:20:21 | 2022-06-22 03:24:22 | 2022-06-22 04:29:02 | 1:04:40 | 0:58:25 | 0:06:15 | smithi | main | rhel | 8.4 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6ba5460a27104908466809fcb14c544d5dfc6522 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 6891191 | 2022-06-22 01:20:23 | 2022-06-22 03:24:22 | 2022-06-22 03:43:32 | 0:19:10 | 0:13:10 | 0:06:00 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} | 1 | |
Failure Reason:
SELinux denials found on ubuntu@smithi119.front.sepia.ceph.com: ['type=AVC msg=audit(1655869292.893:6420): avc: denied { ioctl } for pid=58683 comm="iptables" path="/var/lib/containers/storage/overlay/85709895cb054609ad2f87acf7d18f83d62a66139ca952b73adaa1ef2aa11e66/merged" dev="overlay" ino=3803124 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1655869292.918:6421): avc: denied { ioctl } for pid=58686 comm="iptables" path="/var/lib/containers/storage/overlay/85709895cb054609ad2f87acf7d18f83d62a66139ca952b73adaa1ef2aa11e66/merged" dev="overlay" ino=3803124 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1'] |
fail | 6891192 | 2022-06-22 01:20:24 | 2022-06-22 03:24:33 | 2022-06-22 03:51:41 | 0:27:08 | 0:18:58 | 0:08:10 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason:
Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5eaef7ac-f1dc-11ec-8428-001a4aab830c -e sha1=6ba5460a27104908466809fcb14c544d5dfc6522 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\'' |
fail | 6891193 | 2022-06-22 01:20:26 | 2022-06-22 03:26:33 | 2022-06-22 04:02:12 | 0:35:39 | 0:26:16 | 0:09:23 | smithi | main | ubuntu | 18.04 | rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} | 3 | |
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds |
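
The four mgr-nfs-upgrade failures above (6891174, 6891182, 6891186, 6891192) all die on the same escaped one-liner. Unwrapping the `'"'"'` quoting, the check that `cephadm shell` runs reads as the sketch below: `jq -e` exits nonzero because `.overall` still lists more than one version, i.e. the cluster apparently never converged on a single daemon version after the upgrade.

```sh
# De-escaped form of the failing check from the mgr-nfs-upgrade jobs.
# `jq -e` returns a nonzero exit status when the filter evaluates to
# false, so the job fails while daemons still report mixed versions.
ceph versions | jq -e '.overall | length == 1'
```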
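
Similarly, the rm-zap-add failure (6891180) embeds a whole script in one escaped line. De-escaped (comments added), it reads as below; under `set -e`, the exit status 22 reported on smithi044 — conventionally EINVAL from the ceph CLI — bubbles up from whichever step rejected its arguments.

```sh
set -e                       # stop at the first failing step
set -x                       # trace each command into the job log
ceph orch ps                 # dump daemon and device state for the log
ceph orch device ls
# Map osd.1 back to its device ID, host, and device path.
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
ceph orch osd rm 1           # remove the OSD and wait for the drain
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force   # wipe the freed device
ceph orch daemon add osd $HOST:$DEV       # re-add it as a new OSD
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```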