Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6028696 2021-04-08 10:14:46 2021-04-08 10:16:10 2021-04-08 13:36:23 3:20:13 0:19:35 3:00:38 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi052 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 6028697 2021-04-08 10:14:47 2021-04-08 10:16:11 2021-04-08 14:43:18 4:27:07 3:13:16 1:13:51 smithi master centos 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi176 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

pass 6028698 2021-04-08 10:14:48 2021-04-08 10:16:12 2021-04-08 10:39:26 0:23:14 0:16:24 0:06:50 smithi master rhel 8.3 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 2
dead 6028699 2021-04-08 10:14:49 2021-04-08 10:16:12 2021-04-08 22:26:30 12:10:18 smithi master rhel 8.3 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

hit max job timeout

fail 6028700 2021-04-08 10:14:50 2021-04-08 10:16:13 2021-04-08 11:13:55 0:57:42 0:46:57 0:10:45 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

dead 6028701 2021-04-08 10:14:51 2021-04-08 10:16:53 2021-04-08 22:26:30 12:09:37 smithi master rhel 8.3 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

hit max job timeout

fail 6028702 2021-04-08 10:14:52 2021-04-08 10:16:54 2021-04-08 14:11:09 3:54:15 3:13:34 0:40:41 smithi master rhel 8.3 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_stress_watch} 2
Failure Reason:

Command failed (workunit test rados/stress_watch.sh) on smithi093 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'

fail 6028703 2021-04-08 10:14:53 2021-04-08 10:16:54 2021-04-08 10:36:33 0:19:39 0:08:47 0:10:52 smithi master centos 8.3 rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
dead 6028704 2021-04-08 10:14:54 2021-04-08 10:16:55 2021-04-08 22:26:32 12:09:37 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/rados_5925} 2
Failure Reason:

hit max job timeout

pass 6028705 2021-04-08 10:14:55 2021-04-08 10:18:15 2021-04-08 10:35:49 0:17:34 0:07:23 0:10:11 smithi master centos 8.3 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
fail 6028706 2021-04-08 10:14:56 2021-04-08 10:18:16 2021-04-08 11:12:09 0:53:53 0:45:56 0:07:57 smithi master rhel 8.3 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

wait_for_clean: failed before timeout expired

pass 6028707 2021-04-08 10:14:56 2021-04-08 10:18:47 2021-04-08 10:41:46 0:22:59 0:14:12 0:08:47 smithi master centos 8.3 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
fail 6028708 2021-04-08 10:14:57 2021-04-08 10:18:47 2021-04-08 11:07:54 0:49:07 0:37:13 0:11:54 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

fail 6028709 2021-04-08 10:14:58 2021-04-08 10:20:41 2021-04-08 13:11:01 2:50:20 0:25:14 2:25:06 smithi master rhel 8.3 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed (workunit test mon/pg_autoscaler.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'

fail 6028710 2021-04-08 10:14:59 2021-04-08 10:21:22 2021-04-08 13:40:00 3:18:38 0:17:07 3:01:31 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason:

Command failed on smithi051 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status'

fail 6028711 2021-04-08 10:15:00 2021-04-08 10:21:23 2021-04-08 15:11:47 4:50:24 3:22:31 1:27:53 smithi master rhel 8.3 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi145 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

fail 6028712 2021-04-08 10:15:01 2021-04-08 10:21:53 2021-04-08 11:10:16 0:48:23 0:36:47 0:11:36 smithi master centos 8.2 rados/cephadm/with-work/{0-distro/centos_8.2_kubic_stable fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

wait_for_clean: failed before timeout expired

fail 6028713 2021-04-08 10:15:01 2021-04-08 10:22:34 2021-04-08 10:41:53 0:19:19 0:09:45 0:09:34 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6028714 2021-04-08 10:15:02 2021-04-08 10:22:35 2021-04-08 10:51:47 0:29:12 0:22:45 0:06:27 smithi master rhel 8.3 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi104 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4153f8c2d921d69fffb6073e49566a6e3e0e813e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 6028715 2021-04-08 10:15:03 2021-04-08 10:22:36 2021-04-08 13:13:13 2:50:37 1:04:09 1:46:28 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (500) after waiting for 3000 seconds

fail 6028716 2021-04-08 10:15:04 2021-04-08 10:23:27 2021-04-08 11:29:21 1:05:54 0:15:13 0:50:41 smithi master rhel 8.3 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason:

Command failed on smithi116 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2>> /var/log/ceph/ceph-osd.2.log'

fail 6028717 2021-04-08 10:15:05 2021-04-08 10:23:58 2021-04-08 11:11:19 0:47:21 0:36:58 0:10:23 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

wait_for_clean: failed before timeout expired

dead 6028718 2021-04-08 10:15:06 2021-04-08 10:23:58 2021-04-08 22:34:27 12:10:29 smithi master centos 8.3 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/sync workloads/pool-create-delete} 2
Failure Reason:

hit max job timeout

pass 6028719 2021-04-08 10:15:07 2021-04-08 10:24:39 2021-04-08 10:48:06 0:23:27 0:16:24 0:07:03 smithi master rhel 8.3 rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 3
dead 6028720 2021-04-08 10:15:07 2021-04-08 10:24:40 2021-04-08 22:34:27 12:09:47 smithi master rhel 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

hit max job timeout

fail 6028721 2021-04-08 10:15:08 2021-04-08 10:25:27 2021-04-08 10:50:07 0:24:40 0:18:44 0:05:56 smithi master rhel 8.3 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1