Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6954575 2022-08-01 13:22:29 2022-08-01 13:23:09 2022-08-01 13:58:12 0:35:03 0:28:37 0:06:26 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/lockdep} 2
Failure Reason: Command failed on smithi104 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

fail 6954576 2022-08-01 13:22:30 2022-08-01 13:23:09 2022-08-01 15:45:51 2:22:42 2:16:09 0:06:33 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
Failure Reason: Command failed (workunit test mon/caps.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'

fail 6954577 2022-08-01 13:22:31 2022-08-01 13:23:10 2022-08-01 16:49:18 3:26:08 3:18:09 0:07:59 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
Failure Reason: Command failed (workunit test rados/test.sh) on smithi167 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6954578 2022-08-01 13:22:32 2022-08-01 13:23:10 2022-08-01 13:39:19 0:16:09 0:08:22 0:07:47 smithi main rhel 8.5 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 2
Failure Reason: Command failed on smithi099 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954579 2022-08-01 13:22:33 2022-08-01 13:23:10 2022-08-01 14:42:16 1:19:06 1:12:47 0:06:19 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds

fail 6954580 2022-08-01 13:22:35 2022-08-01 13:23:11 2022-08-01 16:44:32 3:21:21 3:15:19 0:06:02 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_stress_watch} 2
Failure Reason: Command failed (workunit test rados/stress_watch.sh) on smithi064 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh'

fail 6954581 2022-08-01 13:22:36 2022-08-01 13:23:11 2022-08-01 13:38:11 0:15:00 0:07:41 0:07:19 smithi main rhel 8.5 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason: Command failed on smithi195 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954582 2022-08-01 13:22:37 2022-08-01 13:23:11 2022-08-01 13:38:18 0:15:07 0:08:12 0:06:55 smithi main rhel 8.5 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_5925} 2
Failure Reason: Command failed on smithi105 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954583 2022-08-01 13:22:38 2022-08-01 13:23:12 2022-08-01 13:49:04 0:25:52 0:17:27 0:08:25 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 50 --op write 50 --op delete 10 --op tier_promote 30 --op write_excl 50 --pool unique_pool_0'

pass 6954584 2022-08-01 13:22:39 2022-08-01 13:23:12 2022-08-01 13:47:09 0:23:57 0:15:43 0:08:14 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 2
pass 6954585 2022-08-01 13:22:40 2022-08-01 13:23:12 2022-08-01 13:47:52 0:24:40 0:17:09 0:07:31 smithi main centos 8.stream rados/singleton/{all/mon-auth-caps mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 6954586 2022-08-01 13:22:41 2022-08-01 13:23:13 2022-08-01 13:38:14 0:15:01 0:08:06 0:06:55 smithi main rhel 8.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason: Command failed on smithi190 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954587 2022-08-01 13:22:42 2022-08-01 13:23:13 2022-08-01 13:59:10 0:35:57 0:27:57 0:08:00 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
Failure Reason: Command failed on smithi077 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6'

fail 6954588 2022-08-01 13:22:44 2022-08-01 13:23:13 2022-08-01 13:39:12 0:15:59 0:08:42 0:07:17 smithi main rhel 8.5 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_with_skews} 3
Failure Reason: Command failed on smithi086 with status 1: 'sudo yum -y install ceph-radosgw'

dead 6954589 2022-08-01 13:22:45 2022-08-01 13:23:14 2022-08-02 01:31:45 12:08:31 smithi main centos 8.stream rados/singleton/{all/random-eio mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
Failure Reason: hit max job timeout

fail 6954590 2022-08-01 13:22:46 2022-08-01 13:23:14 2022-08-01 13:38:06 0:14:52 0:08:13 0:06:39 smithi main rhel 8.5 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/repair_test} 2
Failure Reason: Command failed on smithi176 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6954591 2022-08-01 13:22:47 2022-08-01 13:23:15 2022-08-01 13:38:08 0:14:53 0:07:43 0:07:10 smithi main rhel 8.5 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason: Command failed on smithi202 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954592 2022-08-01 13:22:48 2022-08-01 13:23:15 2022-08-01 13:39:01 0:15:46 0:08:28 0:07:18 smithi main rhel 8.5 rados/singleton/{all/resolve_stuck_peering mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 2
Failure Reason: Command failed on smithi089 with status 1: 'sudo yum -y install ceph-radosgw'

pass 6954593 2022-08-01 13:22:49 2022-08-01 13:23:15 2022-08-01 13:57:02 0:33:47 0:27:00 0:06:47 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
fail 6954594 2022-08-01 13:22:50 2022-08-01 13:23:15 2022-08-01 13:38:46 0:15:31 0:08:15 0:07:16 smithi main rhel 8.5 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
Failure Reason: Command failed on smithi123 with status 1: 'sudo yum -y install ceph-radosgw'

fail 6954595 2022-08-01 13:22:51 2022-08-01 13:23:16 2022-08-01 13:39:27 0:16:11 0:08:19 0:07:52 smithi main rhel 8.5 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/sync workloads/pool-create-delete} 2
Failure Reason: Command failed on smithi158 with status 1: 'sudo yum -y install ceph-radosgw'

pass 6954596 2022-08-01 13:22:53 2022-08-01 13:23:16 2022-08-01 13:47:19 0:24:03 0:15:56 0:08:07 smithi main centos 8.stream rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_clock_with_skews} 3
fail 6954597 2022-08-01 13:22:54 2022-08-01 13:23:17 2022-08-01 16:49:01 3:25:44 3:17:53 0:07:51 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
Failure Reason: Command failed (workunit test rados/test.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e7f49c256f8f4423de0179cd5ade14f6f211bd5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 6954598 2022-08-01 13:22:55 2022-08-01 13:23:17 2022-08-01 13:38:11 0:14:54 0:07:47 0:07:07 smithi main rhel 8.5 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason: Command failed on smithi135 with status 1: 'sudo yum -y install ceph-radosgw'