Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7441956 2023-10-31 11:40:48 2023-10-31 11:45:31 2023-10-31 12:21:38 0:36:07 0:26:03 0:10:04 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
pass 7441957 2023-10-31 11:40:49 2023-10-31 11:45:31 2023-10-31 12:10:45 0:25:14 0:15:48 0:09:26 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} 1
fail 7441958 2023-10-31 11:40:50 2023-10-31 11:45:31 2023-10-31 15:07:51 3:22:20 3:11:59 0:10:21 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21548fe806cf259deac1421530d5ce720be17997 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'

pass 7441959 2023-10-31 11:40:51 2023-10-31 11:46:22 2023-10-31 12:11:07 0:24:45 0:12:06 0:12:39 smithi main centos 9.stream crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} 1
pass 7441960 2023-10-31 11:40:51 2023-10-31 11:46:22 2023-10-31 12:19:10 0:32:48 0:15:51 0:16:57 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7441961 2023-10-31 11:40:52 2023-10-31 11:54:14 2023-10-31 12:18:55 0:24:41 0:16:11 0:08:30 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} 1
pass 7441962 2023-10-31 11:40:53 2023-10-31 11:54:14 2023-10-31 12:31:37 0:37:23 0:27:53 0:09:30 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
fail 7441963 2023-10-31 11:40:54 2023-10-31 11:54:54 2023-10-31 12:21:06 0:26:12 0:15:05 0:11:07 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} 1
Failure Reason:

"2023-10-31T12:15:45.475435+0000 mon.a (mon.0) 131 : cluster [WRN] Health check failed: Reduced data availability: 12 pgs inactive (PG_AVAILABILITY)" in cluster log

fail 7441964 2023-10-31 11:40:55 2023-10-31 11:56:15 2023-10-31 15:17:08 3:20:53 3:11:37 0:09:16 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi099 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21548fe806cf259deac1421530d5ce720be17997 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'

pass 7441965 2023-10-31 11:40:56 2023-10-31 11:56:15 2023-10-31 12:35:26 0:39:11 0:25:13 0:13:58 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7441966 2023-10-31 11:40:56 2023-10-31 12:00:16 2023-10-31 12:25:15 0:24:59 0:15:47 0:09:12 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} 1
pass 7441967 2023-10-31 11:40:57 2023-10-31 12:00:16 2023-10-31 12:37:17 0:37:01 0:26:24 0:10:37 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} 2
pass 7441968 2023-10-31 11:40:58 2023-10-31 12:02:07 2023-10-31 12:26:58 0:24:51 0:15:47 0:09:04 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} 1
pass 7441969 2023-10-31 11:40:59 2023-10-31 12:02:07 2023-10-31 12:30:41 0:28:34 0:17:03 0:11:31 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} 2
pass 7441970 2023-10-31 11:41:00 2023-10-31 12:03:08 2023-10-31 12:23:56 0:20:48 0:12:11 0:08:37 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} 1
pass 7441971 2023-10-31 11:41:01 2023-10-31 12:03:08 2023-10-31 12:32:45 0:29:37 0:20:04 0:09:33 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} 2
fail 7441972 2023-10-31 11:41:01 2023-10-31 12:03:39 2023-10-31 12:29:14 0:25:35 0:13:16 0:12:19 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} 1
Failure Reason:

"2023-10-31T12:25:38.278705+0000 mon.a (mon.0) 156 : cluster [WRN] Health check failed: Degraded data redundancy: 109/9044 objects degraded (1.205%), 8 pgs degraded (PG_DEGRADED)" in cluster log

pass 7441973 2023-10-31 11:41:02 2023-10-31 12:06:09 2023-10-31 12:42:30 0:36:21 0:20:29 0:15:52 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} 2
fail 7441974 2023-10-31 11:41:03 2023-10-31 12:10:00 2023-10-31 12:31:05 0:21:05 0:12:35 0:08:30 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} 1
Failure Reason:

"2023-10-31T12:29:46.967271+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 104/9992 objects degraded (1.041%), 5 pgs degraded (PG_DEGRADED)" in cluster log

pass 7441975 2023-10-31 11:41:04 2023-10-31 12:10:01 2023-10-31 12:32:05 0:22:04 0:10:45 0:11:19 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} 1
fail 7441976 2023-10-31 11:41:05 2023-10-31 12:10:51 2023-10-31 12:37:56 0:27:05 0:13:18 0:13:47 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} 1
Failure Reason:

"2023-10-31T12:33:49.300331+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 3/1716 objects degraded (0.175%), 2 pgs degraded (PG_DEGRADED)" in cluster log

pass 7441977 2023-10-31 11:41:05 2023-10-31 12:11:12 2023-10-31 12:40:55 0:29:43 0:20:06 0:09:37 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} 2
fail 7441978 2023-10-31 11:41:06 2023-10-31 12:11:42 2023-10-31 12:37:01 0:25:19 0:12:17 0:13:02 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} 1
Failure Reason:

"2023-10-31T12:34:51.952967+0000 mon.a (mon.0) 165 : cluster [WRN] Health check failed: Degraded data redundancy: 23/1410 objects degraded (1.631%), 9 pgs degraded (PG_DEGRADED)" in cluster log

pass 7441979 2023-10-31 11:41:07 2023-10-31 12:16:03 2023-10-31 12:55:28 0:39:25 0:28:30 0:10:55 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7441980 2023-10-31 11:41:08 2023-10-31 12:16:03 2023-10-31 12:42:13 0:26:10 0:14:02 0:12:08 smithi main centos 9.stream crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} 2
pass 7441981 2023-10-31 11:41:09 2023-10-31 12:19:14 2023-10-31 12:52:01 0:32:47 0:22:20 0:10:27 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} 1
fail 7441982 2023-10-31 11:41:09 2023-10-31 12:19:14 2023-10-31 12:42:19 0:23:05 0:11:36 0:11:29 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} 1
Failure Reason:

"2023-10-31T12:40:43.670450+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: Degraded data redundancy: 19/1672 objects degraded (1.136%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 7441983 2023-10-31 11:41:10 2023-10-31 12:21:05 2023-10-31 12:58:51 0:37:46 0:28:02 0:09:44 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7441984 2023-10-31 11:41:11 2023-10-31 12:21:15 2023-10-31 12:52:37 0:31:22 0:22:03 0:09:19 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} 1
pass 7441985 2023-10-31 11:41:12 2023-10-31 12:21:46 2023-10-31 13:00:44 0:38:58 0:27:26 0:11:32 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} 2
fail 7441986 2023-10-31 11:41:13 2023-10-31 12:23:46 2023-10-31 15:45:06 3:21:20 3:10:53 0:10:27 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi084 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21548fe806cf259deac1421530d5ce720be17997 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\''

dead 7441987 2023-10-31 11:41:13 2023-11-01 00:32:09 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} 1
Failure Reason:

hit max job timeout

pass 7441988 2023-10-31 11:41:14 2023-10-31 12:23:57 2023-10-31 12:50:07 0:26:10 0:14:36 0:11:34 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
fail 7441989 2023-10-31 11:41:15 2023-10-31 12:27:08 2023-10-31 12:51:18 0:24:10 0:11:44 0:12:26 smithi main centos 9.stream crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} 1
Failure Reason:

"2023-10-31T12:49:48.016137+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: Degraded data redundancy: 42/8962 objects degraded (0.469%), 2 pgs degraded (PG_DEGRADED)" in cluster log