Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7077257 2022-10-21 15:24:24 2022-10-22 08:45:05 2022-10-22 09:20:04 0:34:59 0:27:49 0:07:10 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7077258 2022-10-21 15:24:25 2022-10-22 08:45:06 2022-10-22 09:20:49 0:35:43 0:28:42 0:07:01 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
pass 7077259 2022-10-21 15:24:26 2022-10-22 08:45:56 2022-10-22 09:07:06 0:21:10 0:13:50 0:07:20 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7077260 2022-10-21 15:24:27 2022-10-22 08:46:07 2022-10-22 09:28:34 0:42:27 0:30:57 0:11:30 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077261 2022-10-21 15:24:28 2022-10-22 09:12:08 1167 smithi main rhel 8.6 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
pass 7077262 2022-10-21 15:24:29 2022-10-22 08:46:47 2022-10-22 09:32:49 0:46:02 0:35:55 0:10:07 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 7077263 2022-10-21 15:24:30 2022-10-22 08:46:48 2022-10-22 09:08:37 0:21:49 0:14:48 0:07:01 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/redirect} 2
fail 7077264 2022-10-21 15:24:31 2022-10-22 08:47:18 2022-10-22 09:02:34 0:15:16 0:05:27 0:09:49 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b40a3bed7bdeff6224dd522c5b540bbe0d11c858 pull'

pass 7077265 2022-10-21 15:24:32 2022-10-22 08:47:19 2022-10-22 09:18:43 0:31:24 0:24:38 0:06:46 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7077266 2022-10-21 15:24:33 2022-10-22 08:47:39 2022-10-22 09:13:32 0:25:53 0:15:07 0:10:46 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/redirect_promote_tests} 2
pass 7077267 2022-10-21 15:24:35 2022-10-22 08:48:00 2022-10-22 09:13:57 0:25:57 0:18:02 0:07:55 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
pass 7077268 2022-10-21 15:24:36 2022-10-22 08:48:00 2022-10-22 09:24:01 0:36:01 0:28:23 0:07:38 smithi main rhel 8.6 rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
fail 7077269 2022-10-21 15:24:37 2022-10-22 08:49:00 2022-10-22 09:06:15 0:17:15 0:07:09 0:10:06 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi133 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

pass 7077270 2022-10-21 15:24:38 2022-10-22 08:49:01 2022-10-22 09:22:32 0:33:31 0:26:18 0:07:13 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/filestore-xfs rados tasks/rados_api_tests validater/lockdep} 2
pass 7077271 2022-10-21 15:24:39 2022-10-22 08:49:31 2022-10-22 09:20:56 0:31:25 0:22:37 0:08:48 smithi main ubuntu 20.04 rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7077272 2022-10-21 15:24:40 2022-10-22 08:49:32 2022-10-22 09:16:43 0:27:11 0:14:33 0:12:38 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} 2
pass 7077273 2022-10-21 15:24:41 2022-10-22 08:50:42 2022-10-22 09:20:30 0:29:48 0:21:47 0:08:01 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
fail 7077274 2022-10-21 15:24:42 2022-10-22 08:50:53 2022-10-22 09:04:06 0:13:13 0:07:03 0:06:10 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi130 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b40a3bed7bdeff6224dd522c5b540bbe0d11c858 pull'

pass 7077275 2022-10-21 15:24:43 2022-10-22 08:51:03 2022-10-22 09:14:17 0:23:14 0:12:51 0:10:23 smithi main ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
pass 7077276 2022-10-21 15:24:44 2022-10-22 08:51:04 2022-10-22 09:55:02 1:03:58 0:56:09 0:07:49 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 7077277 2022-10-21 15:24:46 2022-10-22 08:51:14 2022-10-22 09:14:31 0:23:17 0:14:00 0:09:17 smithi main rhel 8.6 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 2
pass 7077278 2022-10-21 15:24:47 2022-10-22 08:53:25 2022-10-22 09:33:15 0:39:50 0:28:49 0:11:01 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7077279 2022-10-21 15:24:48 2022-10-22 08:53:35 2022-10-22 09:14:11 0:20:36 0:09:04 0:11:32 smithi main ubuntu 20.04 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi157 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b40a3bed7bdeff6224dd522c5b540bbe0d11c858 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7077280 2022-10-21 15:24:49 2022-10-22 08:54:16 2022-10-22 09:22:06 0:27:50 0:17:01 0:10:49 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'wait for toolbox' reached maximum tries (100) after waiting for 500 seconds

fail 7077281 2022-10-21 15:24:50 2022-10-22 08:54:36 2022-10-22 09:10:50 0:16:14 0:10:26 0:05:48 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi036 with status 125: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b40a3bed7bdeff6224dd522c5b540bbe0d11c858 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7077282 2022-10-21 15:24:51 2022-10-22 08:54:37 2022-10-22 09:11:59 0:17:22 0:06:54 0:10:28 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi138 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7077283 2022-10-21 15:24:52 2022-10-22 08:54:37 2022-10-22 12:36:19 3:41:42 3:35:11 0:06:31 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi032 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b40a3bed7bdeff6224dd522c5b540bbe0d11c858 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 7077284 2022-10-21 15:24:53 2022-10-22 08:54:37 2022-10-22 09:07:41 0:13:04 0:06:19 0:06:45 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi107 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7077285 2022-10-21 15:24:54 2022-10-22 08:54:48 2022-10-22 09:12:08 0:17:20 0:07:07 0:10:13 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b40a3bed7bdeff6224dd522c5b540bbe0d11c858 pull'

pass 7077286 2022-10-21 15:24:56 2022-10-22 08:54:48 2022-10-22 11:26:23 2:31:35 2:11:29 0:20:06 smithi main centos 8.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8}} 1
dead 7077287 2022-10-21 15:24:57 2022-10-22 08:55:29 2022-10-22 21:04:23 12:08:54 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:

hit max job timeout

fail 7077288 2022-10-21 15:24:58 2022-10-22 08:55:49 2022-10-22 09:24:03 0:28:14 0:17:59 0:10:15 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'wait for toolbox' reached maximum tries (100) after waiting for 500 seconds