Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6280803 2021-07-20 11:34:20 2021-07-20 11:34:20 2021-07-20 12:14:17 0:39:57 0:28:55 0:11:02 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi081 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.7 flush_pg_stats'

fail 6280804 2021-07-20 11:34:21 2021-07-20 11:34:21 2021-07-20 12:03:14 0:28:53 0:23:10 0:05:43 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

No module named 'tasks.ceph'

pass 6280805 2021-07-20 11:34:21 2021-07-20 11:34:22 2021-07-20 11:59:37 0:25:15 0:14:21 0:10:54 smithi master centos 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6280806 2021-07-20 11:34:22 2021-07-20 11:34:22 2021-07-20 12:04:11 0:29:49 0:22:25 0:07:24 smithi master rhel 8.3 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

No module named 'tasks.ceph'

fail 6280807 2021-07-20 11:34:23 2021-07-20 11:34:23 2021-07-20 12:31:36 0:57:13 0:47:27 0:09:46 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

dead 6280808 2021-07-20 11:34:24 2021-07-20 11:34:24 2021-07-20 23:42:58 12:08:34 smithi master centos 8.3 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

hit max job timeout

fail 6280809 2021-07-20 11:34:24 2021-07-20 11:34:25 2021-07-20 15:01:41 3:27:16 3:16:55 0:10:21 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed (workunit test rados/stress_watch.sh) on smithi033 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'

pass 6280810 2021-07-20 11:34:25 2021-07-20 11:34:25 2021-07-20 11:59:21 0:24:56 0:16:02 0:08:54 smithi master centos 8.3 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6280811 2021-07-20 11:34:26 2021-07-20 11:34:26 2021-07-20 12:04:17 0:29:51 0:22:48 0:07:03 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

No module named 'tasks.cephadm'

fail 6280812 2021-07-20 11:34:27 2021-07-20 11:34:27 2021-07-20 12:31:11 0:56:44 0:47:16 0:09:28 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

dead 6280813 2021-07-20 11:34:27 2021-07-20 11:34:28 2021-07-20 23:43:30 12:09:02 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

hit max job timeout

fail 6280814 2021-07-20 11:34:28 2021-07-20 11:34:28 2021-07-20 12:28:16 0:53:48 0:43:13 0:10:35 smithi master centos 8.3 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

timed out waiting for admin_socket to appear after osd.2 restart

pass 6280815 2021-07-20 11:34:29 2021-07-20 11:34:29 2021-07-20 14:25:26 2:50:57 2:44:39 0:06:18 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mix} 2
fail 6280816 2021-07-20 11:34:30 2021-07-20 11:34:30 2021-07-20 12:26:18 0:51:48 0:40:50 0:10:58 smithi master centos 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason:

Command failed (workunit test mon/pg_autoscaler.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

fail 6280817 2021-07-20 11:34:31 2021-07-20 11:34:31 2021-07-20 12:12:05 0:37:34 0:26:35 0:10:59 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi145 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 6280818 2021-07-20 11:34:31 2021-07-20 11:34:32 2021-07-20 12:32:13 0:57:41 0:46:11 0:11:30 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

wait_for_clean: failed before timeout expired

fail 6280819 2021-07-20 11:34:32 2021-07-20 11:34:32 2021-07-20 12:00:08 0:25:36 0:15:26 0:10:10 smithi master centos 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 6280820 2021-07-20 11:34:33 2021-07-20 11:34:33 2021-07-20 11:57:09 0:22:36 0:13:55 0:08:41 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6280821 2021-07-20 11:34:34 2021-07-20 11:34:34 2021-07-20 12:04:14 0:29:40 0:22:05 0:07:35 smithi master rhel 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

No module named 'tasks.ceph'

fail 6280822 2021-07-20 11:34:34 2021-07-20 11:34:35 2021-07-20 12:04:21 0:29:46 0:22:22 0:07:24 smithi master rhel 8.3 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

No module named 'tasks.ceph'

dead 6280823 2021-07-20 11:34:35 2021-07-20 11:34:35 2021-07-20 23:44:25 12:09:50 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/dedup-io-mixed} 2
Failure Reason:

hit max job timeout

fail 6280824 2021-07-20 11:34:36 2021-07-20 11:34:36 2021-07-20 12:30:04 0:55:28 0:45:57 0:09:31 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

fail 6280825 2021-07-20 11:34:37 2021-07-20 11:34:37 2021-07-20 15:02:26 3:27:49 3:20:53 0:06:56 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1b18e07603ebfbd16486528f11d2a761732592a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6280826 2021-07-20 11:34:37 2021-07-20 11:34:38 2021-07-20 12:03:20 0:28:42 0:22:32 0:06:10 smithi master rhel 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

No module named 'tasks.ceph'