Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6413337 2021-09-29 09:20:58 2021-09-29 09:22:48 2021-09-29 09:46:06 0:23:18 0:13:31 0:09:47 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
fail 6413338 2021-09-29 09:20:59 2021-09-29 09:22:49 2021-09-29 09:39:47 0:16:58 0:11:12 0:05:46 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi193 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6413339 2021-09-29 09:21:00 2021-09-29 09:22:49 2021-09-29 09:41:53 0:19:04 0:08:54 0:10:10 smithi master centos 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6413340 2021-09-29 09:21:00 2021-09-29 09:22:50 2021-09-29 09:40:07 0:17:17 0:11:04 0:06:13 smithi master rhel 8.3 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi117 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6413341 2021-09-29 09:21:01 2021-09-29 09:23:11 2021-09-29 09:42:00 0:18:49 0:09:27 0:09:22 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi064 with status 5: 'sudo systemctl stop ceph-3a4c7c44-2109-11ec-8c25-001a4aab830c@mon.a'

dead 6413342 2021-09-29 09:21:02 2021-09-29 09:23:12 2021-09-29 22:00:08 12:36:56 smithi master centos 8.3 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

hit max job timeout

pass 6413343 2021-09-29 09:21:03 2021-09-29 09:23:22 2021-09-29 10:56:31 1:33:09 1:22:47 0:10:22 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
pass 6413344 2021-09-29 09:21:04 2021-09-29 09:23:24 2021-09-29 09:43:17 0:19:53 0:08:35 0:11:18 smithi master centos 8.3 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6413345 2021-09-29 09:21:04 2021-09-29 09:24:14 2021-09-29 09:45:28 0:21:14 0:13:37 0:07:37 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi125 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6413346 2021-09-29 09:21:05 2021-09-29 09:24:25 2021-09-29 09:43:31 0:19:06 0:09:32 0:09:34 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi067 with status 5: 'sudo systemctl stop ceph-7b6bd490-2109-11ec-8c25-001a4aab830c@mon.a'

dead 6413347 2021-09-29 09:21:06 2021-09-29 09:24:26 2021-09-29 22:22:52 12:58:26 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

hit max job timeout

fail 6413348 2021-09-29 09:21:07 2021-09-29 09:24:36 2021-09-29 09:49:22 0:24:46 0:16:22 0:08:24 smithi master centos 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

timed out waiting for admin_socket to appear after osd.2 restart

fail 6413349 2021-09-29 09:21:08 2021-09-29 09:24:37 2021-09-29 09:37:57 0:13:20 0:06:18 0:07:02 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

Command failed on smithi168 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6413350 2021-09-29 09:21:08 2021-09-29 09:24:47 2021-09-29 09:45:52 0:21:05 0:10:02 0:11:03 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed (workunit test mon/pg_autoscaler.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

fail 6413351 2021-09-29 09:21:09 2021-09-29 09:24:48 2021-09-29 09:51:31 0:26:43 0:17:41 0:09:02 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi049 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 6413352 2021-09-29 09:21:10 2021-09-29 09:24:48 2021-09-29 09:43:40 0:18:52 0:09:12 0:09:40 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi085 with status 5: 'sudo systemctl stop ceph-753020c2-2109-11ec-8c25-001a4aab830c@mon.a'

fail 6413353 2021-09-29 09:21:11 2021-09-29 09:24:50 2021-09-29 09:44:31 0:19:41 0:09:00 0:10:41 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 6413354 2021-09-29 09:21:12 2021-09-29 09:25:31 2021-09-29 09:44:50 0:19:19 0:10:21 0:08:58 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 6413355 2021-09-29 09:21:12 2021-09-29 09:25:31 2021-09-29 09:40:46 0:15:15 smithi master rhel 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 6413356 2021-09-29 09:21:13 2021-09-29 09:25:32 2021-09-29 09:45:23 0:19:51 0:11:56 0:07:55 smithi master rhel 8.3 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi045 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6413357 2021-09-29 09:21:14 2021-09-29 09:25:32 2021-09-29 09:55:01 0:29:29 0:19:10 0:10:19 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6413358 2021-09-29 09:21:15 2021-09-29 09:26:03 2021-09-29 09:44:54 0:18:51 0:09:03 0:09:48 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi058 with status 5: 'sudo systemctl stop ceph-9b984b90-2109-11ec-8c25-001a4aab830c@mon.a'

fail 6413359 2021-09-29 09:21:16 2021-09-29 09:26:05 2021-09-29 09:40:16 0:14:11 0:06:15 0:07:56 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi114 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel'

fail 6413360 2021-09-29 09:21:16 2021-09-29 09:27:25 2021-09-29 09:46:02 0:18:37 0:11:46 0:06:51 smithi master rhel 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi036 with status 1: 'sudo yum -y install ceph-mgr-cephadm'