Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6423235 2021-10-05 14:31:49 2021-10-05 21:22:35 2021-10-06 00:08:14 2:45:39 2:38:45 0:06:54 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
pass 6423236 2021-10-05 14:31:50 2021-10-05 21:22:35 2021-10-05 21:45:05 0:22:30 0:14:07 0:08:23 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} 1
dead 6423237 2021-10-05 14:31:50 2021-10-05 21:22:36 2021-10-06 09:43:37 12:21:01 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.0} 1
Failure Reason: hit max job timeout

pass 6423238 2021-10-05 14:31:51 2021-10-05 21:22:56 2021-10-05 21:48:19 0:25:23 0:15:36 0:09:47 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8.stream} tasks/rados_python} 2
pass 6423239 2021-10-05 14:31:52 2021-10-05 21:23:16 2021-10-05 22:40:59 1:17:43 1:07:04 0:10:39 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
fail 6423240 2021-10-05 14:31:53 2021-10-05 21:23:26 2021-10-05 21:52:25 0:28:59 0:19:33 0:09:26 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{filestore-xfs} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=46ca647adc2b185994da3f30968749d1167d0e3a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6423241 2021-10-05 14:31:54 2021-10-05 21:23:27 2021-10-05 21:49:01 0:25:34 0:14:31 0:11:03 smithi master centos 8.3 rados/cephadm/smoke/{distro/centos_8.3_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 6423242 2021-10-05 14:31:55 2021-10-05 21:24:07 2021-10-05 21:45:48 0:21:41 0:14:19 0:07:22 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache} 2
pass 6423243 2021-10-05 14:31:55 2021-10-05 21:24:17 2021-10-05 21:59:12 0:34:55 0:24:21 0:10:34 smithi master centos 8.3 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 6423244 2021-10-05 14:31:56 2021-10-05 21:24:18 2021-10-05 21:48:11 0:23:53 0:13:30 0:10:23 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_container_tools_3.0 mon_election/connectivity task/test_orch_cli} 1
dead 6423245 2021-10-05 14:31:57 2021-10-05 21:25:18 2021-10-06 09:43:24 12:18:06 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.0} 1
Failure Reason: hit max job timeout

fail 6423246 2021-10-05 14:31:58 2021-10-05 21:25:18 2021-10-06 02:18:58 4:53:40 4:42:57 0:10:43 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=46ca647adc2b185994da3f30968749d1167d0e3a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6423247 2021-10-05 14:31:59 2021-10-05 21:25:49 2021-10-05 21:50:44 0:24:55 0:19:18 0:05:37 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
pass 6423248 2021-10-05 14:31:59 2021-10-05 21:26:02 2021-10-05 21:58:17 0:32:15 0:19:06 0:13:09 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/redirect} 2
pass 6423249 2021-10-05 14:32:00 2021-10-05 21:28:43 2021-10-06 01:57:07 4:28:24 4:20:50 0:07:34 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} 1
fail 6423250 2021-10-05 14:32:01 2021-10-05 21:29:04 2021-10-05 22:14:36 0:45:32 0:27:06 0:18:26 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''

fail 6423251 2021-10-05 14:32:02 2021-10-05 21:29:04 2021-10-05 21:57:54 0:28:50 0:20:09 0:08:41 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=46ca647adc2b185994da3f30968749d1167d0e3a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6423252 2021-10-05 14:32:03 2021-10-05 21:29:24 2021-10-06 00:50:39 3:21:15 3:13:12 0:08:03 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} 1
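A listing like the one above can be summarized directly from the plain text. A minimal sketch, assuming the listing is saved as `run.txt` and that every job row begins with its status keyword (`pass`, `fail`, or `dead`) followed by a numeric job ID:

```shell
# Count jobs by status in a saved teuthology run listing.
# Job rows start with the status word and a job ID; header and
# "Failure Reason" lines do not match the pattern and are skipped.
awk '/^(pass|fail|dead) [0-9]+/ {count[$1]++}
     END {for (s in count) print s, count[s]}' run.txt
```

The same pattern extends to other roll-ups, e.g. grouping by OS or by suite path, by switching the field used as the array key.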