Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6382917 2021-09-10 14:27:16 2021-09-10 14:28:00 2021-09-10 14:52:58 0:24:58 0:13:19 0:11:39 smithi master centos 8.3 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6382918 2021-09-10 14:27:17 2021-09-10 14:28:00 2021-09-10 14:54:15 0:26:15 0:19:09 0:07:06 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-services/client-keyring 3-final} 2
pass 6382919 2021-09-10 14:27:18 2021-09-10 14:28:00 2021-09-10 17:18:25 2:50:25 2:44:01 0:06:24 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
pass 6382920 2021-09-10 14:27:19 2021-09-10 14:28:01 2021-09-10 15:07:01 0:39:00 0:31:54 0:07:06 smithi master rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6382921 2021-09-10 14:27:20 2021-09-10 14:28:01 2021-09-10 14:55:16 0:27:15 0:16:42 0:10:33 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
pass 6382922 2021-09-10 14:27:21 2021-09-10 14:28:01 2021-09-10 15:05:05 0:37:04 0:27:26 0:09:38 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6382923 2021-09-10 14:27:22 2021-09-10 14:28:01 2021-09-10 15:03:00 0:34:59 0:24:00 0:10:59 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
pass 6382924 2021-09-10 14:27:23 2021-09-10 14:28:03 2021-09-10 15:45:12 1:17:09 1:07:01 0:10:08 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 6382925 2021-09-10 14:27:24 2021-09-10 14:28:03 2021-09-10 14:55:44 0:27:41 0:16:22 0:11:19 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 6382926 2021-09-10 14:27:25 2021-09-10 14:28:03 2021-09-10 14:56:02 0:27:59 0:16:38 0:11:21 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f5827c132aca9c4bc281f1e15eb3d5fa51e35f8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6382927 2021-09-10 14:27:26 2021-09-10 14:28:03 2021-09-10 16:04:19 1:36:16 1:22:57 0:13:19 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

pass 6382928 2021-09-10 14:27:27 2021-09-10 14:28:04 2021-09-10 17:15:00 2:46:56 2:36:15 0:10:41 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
pass 6382929 2021-09-10 14:27:28 2021-09-10 14:28:05 2021-09-10 15:15:57 0:47:52 0:35:43 0:12:09 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
pass 6382930 2021-09-10 14:27:29 2021-09-10 14:28:05 2021-09-10 14:57:36 0:29:31 0:23:01 0:06:30 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-agent-small} 2
pass 6382931 2021-09-10 14:27:30 2021-09-10 14:28:06 2021-09-10 14:53:47 0:25:41 0:15:31 0:10:10 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/mirror 3-final} 2
pass 6382932 2021-09-10 14:27:31 2021-09-10 14:28:06 2021-09-10 15:12:28 0:44:22 0:35:02 0:09:20 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
pass 6382933 2021-09-10 14:27:32 2021-09-10 14:28:06 2021-09-10 14:53:51 0:25:45 0:19:34 0:06:11 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-services/nfs-ingress-rgw 3-final} 2
pass 6382934 2021-09-10 14:27:32 2021-09-10 14:28:07 2021-09-10 14:56:11 0:28:04 0:17:06 0:10:58 smithi master ubuntu 20.04 rados/cephadm/smoke/{distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
dead 6382935 2021-09-10 14:27:33 2021-09-10 14:28:08 2021-09-10 14:47:30 0:19:22 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6382936 2021-09-10 14:27:34 2021-09-10 14:32:29 2021-09-10 15:15:53 0:43:24 0:27:36 0:15:48 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6382937 2021-09-10 14:27:35 2021-09-10 14:39:20 2021-09-10 14:57:28 0:18:08 0:08:31 0:09:37 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6382938 2021-09-10 14:27:36 2021-09-10 14:40:20 2021-09-10 19:10:12 4:29:52 4:18:59 0:10:53 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} 1
pass 6382939 2021-09-10 14:27:37 2021-09-10 14:40:21 2021-09-10 15:05:21 0:25:00 0:14:34 0:10:26 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 1-start 2-services/iscsi 3-final} 2
fail 6382940 2021-09-10 14:27:38 2021-09-10 14:40:21 2021-09-10 15:19:26 0:39:05 0:19:52 0:19:13 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''

pass 6382941 2021-09-10 14:27:39 2021-09-10 14:40:21 2021-09-10 15:35:55 0:55:34 0:44:30 0:11:04 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6382942 2021-09-10 14:27:40 2021-09-10 14:40:32 2021-09-10 15:08:39 0:28:07 0:17:28 0:10:39 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f5827c132aca9c4bc281f1e15eb3d5fa51e35f8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6382943 2021-09-10 14:27:41 2021-09-10 14:41:42 2021-09-10 15:14:29 0:32:47 0:23:25 0:09:22 smithi master centos 8.3 rados/cephadm/with-work/{0-distro/centos_8.3_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2