Status   Job ID   Posted   Started   Updated   Runtime   Duration   In Waiting   Machine   Teuthology Branch   OS Type   OS Version   Description   Nodes
fail 6376960 2021-09-07 00:45:47 2021-09-07 00:46:36 2021-09-07 01:22:03 0:35:27 0:24:15 0:11:12 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi041 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
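Note: status 124 is the exit code GNU timeout(1) returns when the wrapped command overruns its limit, so the monitors did not serve the OSD map within 120 seconds. A minimal sketch of the same probe, run outside the teuthology wrappers:

    # exit status 124 here means the cluster never returned the OSD map
    timeout 120 ceph --cluster ceph osd dump --format=json
    echo $?   # 124 on timeout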

fail 6376961 2021-09-07 00:45:48 2021-09-07 00:46:36 2021-09-07 01:05:23 0:18:47 0:13:03 0:05:44 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed on smithi098 with status 1: 'sudo yum -y install ceph-mgr-cephadm'
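Note: exit status 1 from yum is a generic failure; on these rhel 8.3 nodes it most commonly means ceph-mgr-cephadm was absent from the repos configured for the run. A quick check, assuming shell access to the node:

    # "No matching package" output points at a missing or misconfigured repo
    sudo yum info ceph-mgr-cephadm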

pass 6376962 2021-09-07 00:45:49 2021-09-07 00:46:36 2021-09-07 01:09:58 0:23:22 0:12:06 0:11:16 smithi master centos 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6376963 2021-09-07 00:45:50 2021-09-07 00:46:37 2021-09-07 01:05:19 0:18:42 0:12:45 0:05:57 smithi master rhel 8.3 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi134 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6376964 2021-09-07 00:45:50 2021-09-07 00:46:37 2021-09-07 01:06:19 0:19:42 0:11:20 0:08:22 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-55de12cc-0f77-11ec-8c25-001a4aab830c@mon.a'
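Note: systemctl typically reports status 5 when the named unit is not loaded, which suggests cephadm bootstrap never got far enough to create the mon unit for this fsid. A hedged way to confirm on the node:

    # an empty listing means no cephadm-managed units were ever created
    sudo systemctl list-units 'ceph-*'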

dead 6376965 2021-09-07 00:45:51 2021-09-07 00:46:38 2021-09-07 12:55:08 12:08:30 smithi master centos 8.3 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

hit max job timeout
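Note: the ~12:08 runtimes on the dead jobs in this run line up with a 12-hour cap; once the max job timeout is hit, teuthology reaps the job, so no duration, in-waiting time, or log beyond this line is recorded.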

fail 6376966 2021-09-07 00:45:52 2021-09-07 00:46:38 2021-09-07 02:36:45 1:50:07 1:38:03 0:12:04 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed (workunit test rados/stress_watch.sh) on smithi012 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'
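Note: status 134 is 128 + 6, i.e. the workunit was killed by SIGABRT, usually an assertion failure in a ceph binary rather than in the shell script itself. Exit codes above 128 encode the fatal signal number:

    # prints ABRT
    kill -l $((134 - 128))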

pass 6376967 2021-09-07 00:45:53 2021-09-07 00:46:39 2021-09-07 01:10:26 0:23:47 0:12:03 0:11:44 smithi master centos 8.3 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6376968 2021-09-07 00:45:53 2021-09-07 00:46:39 2021-09-07 01:10:36 0:23:57 0:16:19 0:07:38 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6376969 2021-09-07 00:45:54 2021-09-07 00:46:39 2021-09-07 01:06:01 0:19:22 0:11:33 0:07:49 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi082 with status 5: 'sudo systemctl stop ceph-5d7e8d40-0f77-11ec-8c25-001a4aab830c@mon.a'

dead 6376970 2021-09-07 00:45:55 2021-09-07 00:46:40 2021-09-07 12:55:43 12:09:03 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

hit max job timeout

fail 6376971 2021-09-07 00:45:56 2021-09-07 00:46:40 2021-09-07 01:33:31 0:46:51 0:38:15 0:08:36 smithi master centos 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

timed out waiting for admin_socket to appear after osd.2 restart
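Note: after restarting a daemon, teuthology polls its admin socket until it responds, so this failure means osd.2 never came back up within the wait. A manual equivalent, assuming the default socket path:

    # the socket only appears once the restarted daemon is up
    sudo ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok version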

fail 6376972 2021-09-07 00:45:56 2021-09-07 00:46:41 2021-09-07 00:59:24 0:12:43 0:06:14 0:06:29 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
Failure Reason:

Command failed on smithi132 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
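Note: this is the same repo failure as the ceph-mgr-cephadm installs above, here hitting the full package set. The four repeated sqlite-devel arguments (presumably stacked from several test overrides) are harmless; yum accepts duplicate package names.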

fail 6376973 2021-09-07 00:45:57 2021-09-07 00:46:41 2021-09-07 01:12:21 0:25:40 0:13:40 0:12:00 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed (workunit test mon/pg_autoscaler.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

fail 6376974 2021-09-07 00:45:58 2021-09-07 00:46:42 2021-09-07 01:22:57 0:36:15 0:23:15 0:13:00 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi076 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 6376975 2021-09-07 00:45:58 2021-09-07 00:46:42 2021-09-07 01:04:01 0:17:19 0:10:53 0:06:26 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-3e43dbec-0f77-11ec-8c25-001a4aab830c@mon.a'

fail 6376976 2021-09-07 00:45:59 2021-09-07 00:46:42 2021-09-07 01:10:24 0:23:42 0:12:00 0:11:42 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 6376977 2021-09-07 00:46:00 2021-09-07 00:46:43 2021-09-07 01:10:29 0:23:46 0:12:32 0:11:14 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi111 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6376978 2021-09-07 00:46:00 2021-09-07 00:46:43 2021-09-07 01:06:03 0:19:20 0:13:18 0:06:02 smithi master rhel 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6376979 2021-09-07 00:46:01 2021-09-07 00:46:44 2021-09-07 01:09:14 0:22:30 0:14:37 0:07:53 smithi master rhel 8.3 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi039 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

dead 6376980 2021-09-07 00:46:02 2021-09-07 00:46:44 2021-09-07 12:55:21 12:08:37 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

hit max job timeout

fail 6376981 2021-09-07 00:46:02 2021-09-07 00:46:45 2021-09-07 01:05:56 0:19:11 0:12:16 0:06:55 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi007 with status 5: 'sudo systemctl stop ceph-6b52006e-0f77-11ec-8c25-001a4aab830c@mon.a'

fail 6376982 2021-09-07 00:46:03 2021-09-07 00:46:45 2021-09-07 01:00:41 0:13:56 0:06:03 0:07:53 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi164 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6376983 2021-09-07 00:46:04 2021-09-07 00:46:45 2021-09-07 01:06:14 0:19:29 0:12:45 0:06:44 smithi master rhel 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi183 with status 1: 'sudo yum -y install ceph-mgr-cephadm'