Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7428386 2023-10-15 14:39:38 2023-10-15 14:39:44 2023-10-15 15:16:55 0:37:11 0:27:21 0:09:50 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
pass 7428387 2023-10-15 14:39:39 2023-10-15 14:40:04 2023-10-15 15:08:05 0:28:01 0:17:12 0:10:49 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} 1
fail 7428388 2023-10-15 14:39:40 2023-10-15 14:40:35 2023-10-15 18:01:40 3:21:05 3:12:31 0:08:34 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi144 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9b943423e52ce7834e53a46d2315dd8f44efe018 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
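Status 124 here is not an assertion failure: the workunit is wrapped in GNU coreutils `timeout 3h`, and `timeout` exits 124 when it has to kill a command that hit its time limit. So this job (and the similar 3-hour failure of 7428416 below) indicates that the script hung or overran its budget. A minimal illustration of the general shell behavior, not taken from this run:

```shell
# `timeout` kills the wrapped command once the limit expires and then
# exits with status 124, which is what teuthology records for the job.
timeout 1 sleep 3
echo "exit status: $?"   # prints: exit status: 124
```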

pass 7428389 2023-10-15 14:39:41 2023-10-15 14:40:35 2023-10-15 15:04:14 0:23:39 0:11:42 0:11:57 smithi main centos 9.stream crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} 1
pass 7428390 2023-10-15 14:39:42 2023-10-15 14:41:06 2023-10-15 15:08:35 0:27:29 0:16:13 0:11:16 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7428391 2023-10-15 14:39:43 2023-10-15 14:43:36 2023-10-15 15:08:16 0:24:40 0:15:44 0:08:56 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} 1
pass 7428392 2023-10-15 14:39:43 2023-10-15 14:43:37 2023-10-15 15:20:48 0:37:11 0:27:32 0:09:39 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7428393 2023-10-15 14:39:44 2023-10-15 14:43:47 2023-10-15 15:11:51 0:28:04 0:15:06 0:12:58 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} 1
fail 7428394 2023-10-15 14:39:45 2023-10-15 14:46:18 2023-10-15 15:11:14 0:24:56 0:15:09 0:09:47 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9b943423e52ce7834e53a46d2315dd8f44efe018 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'

pass 7428395 2023-10-15 14:39:46 2023-10-15 14:46:18 2023-10-15 15:23:25 0:37:07 0:25:27 0:11:40 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7428396 2023-10-15 14:39:47 2023-10-15 14:48:29 2023-10-15 15:13:34 0:25:05 0:15:37 0:09:28 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} 1
pass 7428397 2023-10-15 14:39:48 2023-10-15 14:48:29 2023-10-15 15:26:54 0:38:25 0:26:37 0:11:48 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
pass 7428398 2023-10-15 14:39:49 2023-10-15 14:48:59 2023-10-15 15:13:44 0:24:45 0:15:55 0:08:50 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} 1
pass 7428399 2023-10-15 14:39:49 2023-10-15 14:49:00 2023-10-15 15:17:44 0:28:44 0:17:02 0:11:42 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} 2
pass 7428400 2023-10-15 14:39:50 2023-10-15 14:50:20 2023-10-15 15:13:41 0:23:21 0:13:42 0:09:39 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7428401 2023-10-15 14:39:51 2023-10-15 14:50:21 2023-10-15 15:21:46 0:31:25 0:21:06 0:10:19 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} 2
fail 7428402 2023-10-15 14:39:52 2023-10-15 14:50:21 2023-10-15 15:15:02 0:24:41 0:13:54 0:10:47 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} 1
Failure Reason:

"2023-10-15T15:12:14.063839+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 137/9438 objects degraded (1.452%), 8 pgs degraded (PG_DEGRADED)" in cluster log

pass 7428403 2023-10-15 14:39:53 2023-10-15 14:52:12 2023-10-15 15:21:01 0:28:49 0:20:32 0:08:17 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} 2
fail 7428404 2023-10-15 14:39:54 2023-10-15 14:52:22 2023-10-15 15:13:30 0:21:08 0:12:29 0:08:39 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} 1
Failure Reason:

"2023-10-15T15:12:08.172436+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 80/9524 objects degraded (0.840%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 7428405 2023-10-15 14:39:54 2023-10-15 14:52:22 2023-10-15 15:13:27 0:21:05 0:11:10 0:09:55 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
fail 7428406 2023-10-15 14:39:55 2023-10-15 14:52:23 2023-10-15 15:15:24 0:23:01 0:13:31 0:09:30 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} 1
Failure Reason:

"2023-10-15T15:11:57.320409+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 18/1554 objects degraded (1.158%), 9 pgs degraded (PG_DEGRADED)" in cluster log

fail 7428407 2023-10-15 14:39:56 2023-10-15 14:52:23 2023-10-15 15:23:59 0:31:36 0:20:46 0:10:50 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
Failure Reason:

Command failed on smithi114 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats'

fail 7428408 2023-10-15 14:39:57 2023-10-15 14:52:43 2023-10-15 15:16:07 0:23:24 0:12:22 0:11:02 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} 1
Failure Reason:

"2023-10-15T15:13:02.712386+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 14/1828 objects degraded (0.766%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 7428409 2023-10-15 14:39:58 2023-10-15 14:52:44 2023-10-15 15:33:24 0:40:40 0:28:35 0:12:05 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7428410 2023-10-15 14:39:59 2023-10-15 14:54:14 2023-10-15 15:21:23 0:27:09 0:14:18 0:12:51 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} 2
pass 7428411 2023-10-15 14:39:59 2023-10-15 14:56:05 2023-10-15 15:26:56 0:30:51 0:21:59 0:08:52 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} 1
fail 7428412 2023-10-15 14:40:00 2023-10-15 14:56:05 2023-10-15 15:17:49 0:21:44 0:12:20 0:09:24 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} 1
Failure Reason:

"2023-10-15T15:16:05.915473+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 16/1448 objects degraded (1.105%), 9 pgs degraded (PG_DEGRADED)" in cluster log

pass 7428413 2023-10-15 14:40:01 2023-10-15 14:57:16 2023-10-15 15:38:25 0:41:09 0:27:15 0:13:54 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7428414 2023-10-15 14:40:02 2023-10-15 15:00:36 2023-10-15 15:31:36 0:31:00 0:22:29 0:08:31 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} 1
pass 7428415 2023-10-15 14:40:03 2023-10-15 15:00:37 2023-10-15 15:38:38 0:38:01 0:27:37 0:10:24 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} 2
fail 7428416 2023-10-15 14:40:04 2023-10-15 15:01:37 2023-10-15 18:21:08 3:19:31 3:10:56 0:08:35 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9b943423e52ce7834e53a46d2315dd8f44efe018 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\''

pass 7428417 2023-10-15 14:40:05 2023-10-15 15:29:27 945 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} 1
pass 7428418 2023-10-15 14:40:05 2023-10-15 15:04:18 2023-10-15 15:31:05 0:26:47 0:15:11 0:11:36 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
fail 7428419 2023-10-15 14:40:06 2023-10-15 15:05:49 2023-10-15 15:26:56 0:21:07 0:12:12 0:08:55 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} 1
Failure Reason:

"2023-10-15T15:25:54.232022+0000 mon.a (mon.0) 156 : cluster [WRN] Health check failed: Degraded data redundancy: 174/9500 objects degraded (1.832%), 10 pgs degraded (PG_DEGRADED)" in cluster log