Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7422331 2023-10-12 01:26:34 2023-10-12 01:31:24 2023-10-12 02:08:16 0:36:52 0:27:38 0:09:14 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
pass 7422332 2023-10-12 01:26:34 2023-10-12 01:31:24 2023-10-12 01:56:15 0:24:51 0:16:11 0:08:40 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} 1
fail 7422333 2023-10-12 01:26:35 2023-10-12 01:31:25 2023-10-12 04:52:09 3:20:44 3:12:49 0:07:55 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6b24e899e33913e9eb55d38d1531a93a0747f52c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
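
Note: exit status 124 is the code GNU coreutils "timeout" returns when the wrapped command runs past its limit, so this job exceeded the 3h cap on test_crimson_librbd.sh rather than failing an assertion. A minimal sketch of the mechanism (illustrative only, assuming GNU coreutils):

    timeout 2s sleep 10; echo "exit=$?"   # inner command is killed at the deadline; prints exit=124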

pass 7422334 2023-10-12 01:26:36 2023-10-12 01:31:25 2023-10-12 01:56:25 0:25:00 0:15:54 0:09:06 smithi main centos 9.stream crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} 1
pass 7422335 2023-10-12 01:26:36 2023-10-12 01:31:35 2023-10-12 01:56:51 0:25:16 0:16:05 0:09:11 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7422336 2023-10-12 01:26:37 2023-10-12 01:31:36 2023-10-12 01:57:49 0:26:13 0:16:10 0:10:03 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} 1
dead 7422337 2023-10-12 01:26:38 2023-10-12 01:32:16 2023-10-12 13:43:13 12:10:57 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

hit max job timeout
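
Note: "hit max job timeout" means teuthology declared the job dead and reaped it after its total runtime (12:10:57 here) exceeded the allowed maximum, which is why this row has no Duration or In Waiting values. A rough sketch of such a watchdog (illustrative only; not teuthology's actual code, and "$job_pid" is a hypothetical variable):

    start=$(date +%s); max=$((12 * 3600))        # allow ~12h of wall-clock time
    while kill -0 "$job_pid" 2>/dev/null; do     # loop while the job process is alive
        if (( $(date +%s) - start > max )); then
            kill "$job_pid"                      # reap the job and mark it dead
            echo "hit max job timeout"
            break
        fi
        sleep 60
    done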

pass 7422338 2023-10-12 01:26:38 2023-10-12 01:34:57 2023-10-12 01:57:56 0:22:59 0:14:51 0:08:08 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} 1
pass 7422339 2023-10-12 01:26:39 2023-10-12 01:34:57 2023-10-12 02:01:48 0:26:51 0:18:25 0:08:26 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
pass 7422340 2023-10-12 01:26:40 2023-10-12 01:34:58 2023-10-12 02:10:43 0:35:45 0:25:13 0:10:32 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7422341 2023-10-12 01:26:40 2023-10-12 01:35:18 2023-10-12 02:00:19 0:25:01 0:15:54 0:09:07 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} 1
pass 7422342 2023-10-12 01:26:41 2023-10-12 01:35:18 2023-10-12 02:11:30 0:36:12 0:26:20 0:09:52 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
pass 7422343 2023-10-12 01:26:42 2023-10-12 01:36:29 2023-10-12 02:02:10 0:25:41 0:16:15 0:09:26 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} 1
pass 7422344 2023-10-12 01:26:42 2023-10-12 01:37:39 2023-10-12 02:06:18 0:28:39 0:16:46 0:11:53 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} 2
pass 7422345 2023-10-12 01:26:43 2023-10-12 01:38:30 2023-10-12 01:59:40 0:21:10 0:12:22 0:08:48 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7422346 2023-10-12 01:26:44 2023-10-12 01:38:50 2023-10-12 02:10:35 0:31:45 0:20:15 0:11:30 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} 2
fail 7422347 2023-10-12 01:26:44 2023-10-12 01:39:11 2023-10-12 02:04:26 0:25:15 0:12:50 0:12:25 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} 1
Failure Reason:

"2023-10-12T02:00:38.234396+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 199/9608 objects degraded (2.071%), 7 pgs degraded (PG_DEGRADED)" in cluster log

pass 7422348 2023-10-12 01:26:45 2023-10-12 01:41:31 2023-10-12 02:11:42 0:30:11 0:20:18 0:09:53 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} 2
fail 7422349 2023-10-12 01:26:46 2023-10-12 01:42:42 2023-10-12 02:05:52 0:23:10 0:11:55 0:11:15 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} 1
Failure Reason:

"2023-10-12T02:03:41.397242+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 158/10832 objects degraded (1.459%), 8 pgs degraded (PG_DEGRADED)" in cluster log

pass 7422350 2023-10-12 01:26:46 2023-10-12 01:45:03 2023-10-12 02:05:32 0:20:29 0:11:20 0:09:09 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
fail 7422351 2023-10-12 01:26:47 2023-10-12 01:45:03 2023-10-12 02:08:46 0:23:43 0:13:02 0:10:41 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} 1
Failure Reason:

"2023-10-12T02:05:02.641291+0000 mon.a (mon.0) 154 : cluster [WRN] Health check failed: Degraded data redundancy: 24/1504 objects degraded (1.596%), 9 pgs degraded (PG_DEGRADED)" in cluster log

pass 7422352 2023-10-12 01:26:48 2023-10-12 01:45:23 2023-10-12 02:18:21 0:32:58 0:22:05 0:10:53 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
fail 7422353 2023-10-12 01:26:48 2023-10-12 01:47:14 2023-10-12 02:08:13 0:20:59 0:12:15 0:08:44 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} 1
Failure Reason:

"2023-10-12T02:06:11.427958+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 30/1704 objects degraded (1.761%), 6 pgs degraded (PG_DEGRADED)" in cluster log

pass 7422354 2023-10-12 01:26:49 2023-10-12 01:47:14 2023-10-12 02:24:30 0:37:16 0:28:15 0:09:01 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7422355 2023-10-12 01:26:50 2023-10-12 01:47:15 2023-10-12 02:12:44 0:25:29 0:14:01 0:11:28 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} 2
pass 7422356 2023-10-12 01:26:50 2023-10-12 01:49:25 2023-10-12 02:22:53 0:33:28 0:22:50 0:10:38 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} 1
fail 7422357 2023-10-12 01:26:51 2023-10-12 01:50:06 2023-10-12 02:10:38 0:20:32 0:12:00 0:08:32 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} 1
Failure Reason:

"2023-10-12T02:09:27.648577+0000 mon.a (mon.0) 150 : cluster [WRN] Health check failed: Degraded data redundancy: 25/1648 objects degraded (1.517%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 7422358 2023-10-12 01:26:52 2023-10-12 01:50:06 2023-10-12 02:29:37 0:39:31 0:29:19 0:10:12 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7422359 2023-10-12 01:26:52 2023-10-12 01:50:36 2023-10-12 02:25:12 0:34:36 0:22:37 0:11:59 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} 1
pass 7422360 2023-10-12 01:26:53 2023-10-12 01:50:47 2023-10-12 02:31:44 0:40:57 0:29:47 0:11:10 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} 2
fail 7422361 2023-10-12 01:26:54 2023-10-12 01:52:27 2023-10-12 05:13:35 3:21:08 3:11:12 0:09:56 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi135 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6b24e899e33913e9eb55d38d1531a93a0747f52c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\''
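
Note: another exit-124 timeout, i.e. the Python librbd suite ran past the 3h cap. The trailing -m 'not skip_if_crimson' is a pytest marker expression that deselects tests marked skip_if_crimson before the run. A hedged sketch of applying the same filter directly (paths assume a local ceph checkout; not a verbatim teuthology command):

    cd ceph/src/test/pybind                                       # location of test_rbd.py in the ceph tree
    python3 -m pytest test_rbd.py -m 'not skip_if_crimson' -v     # run only tests lacking the marker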

pass 7422362 2023-10-12 01:26:54 2023-10-12 01:52:28 2023-10-12 02:18:05 0:25:37 0:15:52 0:09:45 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} 1
pass 7422363 2023-10-12 01:26:55 2023-10-12 01:52:48 2023-10-12 02:18:28 0:25:40 0:15:37 0:10:03 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
fail 7422364 2023-10-12 01:26:56 2023-10-12 01:53:38 2023-10-12 02:14:52 0:21:14 0:11:59 0:09:15 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} 1
Failure Reason:

"2023-10-12T02:13:05.683962+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 166/9444 objects degraded (1.758%), 8 pgs degraded (PG_DEGRADED)" in cluster log