User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yingxin | 2023-10-12 01:35:43 | 2023-10-12 01:54:32 | 2023-10-12 14:17:19 | 12:22:47 | crimson-rados | wip-yingxin-crimson-osd-crosscore-pg-submission | smithi | 9587ffd | 9 | 18 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7422367 | 2023-10-12 01:35:55 | 2023-10-12 01:54:32 | 2023-10-12 05:18:28 | 3:23:56 | 3:13:14 | 0:10:42 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi125 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7422368 | 2023-10-12 01:35:56 | 2023-10-12 01:54:32 | 2023-10-12 02:19:24 | 0:24:52 | 0:16:42 | 0:08:10 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 |
fail | 7422369 | 2023-10-12 01:35:56 | 2023-10-12 01:54:32 | 2023-10-12 05:15:49 | 3:21:17 | 3:12:32 | 0:08:45 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi086 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7422370 | 2023-10-12 01:35:57 | 2023-10-12 01:54:33 | 2023-10-12 02:20:57 | 0:26:24 | 0:15:27 | 0:10:57 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 |
pass | 7422371 | 2023-10-12 01:35:58 | 2023-10-12 01:55:43 | 2023-10-12 02:21:29 | 0:25:46 | 0:16:34 | 0:09:12 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 |
pass | 7422372 | 2023-10-12 01:35:58 | 2023-10-12 01:56:24 | 2023-10-12 02:20:59 | 0:24:35 | 0:16:48 | 0:07:47 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 |
dead | 7422373 | 2023-10-12 01:35:59 | 2023-10-12 01:56:24 | 2023-10-12 14:04:45 | 12:08:21 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 |
Failure Reason:
hit max job timeout
pass | 7422374 | 2023-10-12 01:35:59 | 2023-10-12 01:56:34 | 2023-10-12 02:21:46 | 0:25:12 | 0:16:02 | 0:09:10 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 |
fail | 7422375 | 2023-10-12 01:36:00 | 2023-10-12 01:56:45 | 2023-10-12 05:20:25 | 3:23:40 | 3:13:12 | 0:10:28 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi072 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7422376 | 2023-10-12 01:36:00 | 2023-10-12 01:56:45 | 2023-10-12 02:32:15 | 0:35:30 | 0:26:08 | 0:09:22 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 |
fail | 7422377 | 2023-10-12 01:36:01 | 2023-10-12 01:56:55 | 2023-10-12 02:22:06 | 0:25:11 | 0:16:41 | 0:08:30 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 |
Failure Reason:
"2023-10-12T02:16:52.859401+0000 mon.a (mon.0) 130 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 7422378 | 2023-10-12 01:36:02 | 2023-10-12 01:57:16 | 2023-10-12 03:18:41 | 1:21:25 | 1:12:32 | 0:08:53 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 |
Failure Reason:
reached maximum tries (501) after waiting for 3000 seconds
pass | 7422379 | 2023-10-12 01:36:02 | 2023-10-12 01:57:36 | 2023-10-12 02:25:32 | 0:27:56 | 0:16:47 | 0:11:09 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 |
fail | 7422380 | 2023-10-12 01:36:03 | 2023-10-12 01:57:57 | 2023-10-12 03:22:08 | 1:24:11 | 1:12:24 | 0:11:47 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh -m \'not (wait or tier or ec or bench or stats)\''
fail | 7422381 | 2023-10-12 01:36:03 | 2023-10-12 01:58:57 | 2023-10-12 05:20:11 | 3:21:14 | 3:12:46 | 0:08:28 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi194 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'
dead | 7422382 | 2023-10-12 01:36:04 | 2023-10-12 01:58:57 | 2023-10-12 14:07:23 | 12:08:26 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason:
hit max job timeout
fail | 7422383 | 2023-10-12 01:36:04 | 2023-10-12 01:59:48 | 2023-10-12 02:22:47 | 0:22:59 | 0:13:59 | 0:09:00 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 |
Failure Reason:
"2023-10-12T02:19:45.877720+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 118/9380 objects degraded (1.258%), 7 pgs degraded (PG_DEGRADED)" in cluster log
dead | 7422384 | 2023-10-12 01:36:05 | 2023-10-12 01:59:48 | 2023-10-12 14:08:39 | 12:08:51 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 |
Failure Reason:
hit max job timeout
fail | 7422385 | 2023-10-12 01:36:06 | 2023-10-12 02:00:49 | 2023-10-12 02:21:54 | 0:21:05 | 0:12:06 | 0:08:59 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 |
Failure Reason:
"2023-10-12T02:20:12.766767+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 130/10930 objects degraded (1.189%), 6 pgs degraded (PG_DEGRADED)" in cluster log
fail | 7422386 | 2023-10-12 01:36:06 | 2023-10-12 02:00:49 | 2023-10-12 05:23:07 | 3:22:18 | 3:12:31 | 0:09:47 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 |
Failure Reason:
Command failed (workunit test rbd/test_lock_fence.sh) on smithi078 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh'
fail | 7422387 | 2023-10-12 01:36:07 | 2023-10-12 02:01:49 | 2023-10-12 02:24:55 | 0:23:06 | 0:13:21 | 0:09:45 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 |
Failure Reason:
"2023-10-12T02:21:12.805443+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 26/1672 objects degraded (1.555%), 10 pgs degraded (PG_DEGRADED)" in cluster log
dead | 7422388 | 2023-10-12 01:36:07 | 2023-10-12 02:02:20 | 2023-10-12 14:13:37 | 12:11:17 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 |
Failure Reason:
hit max job timeout
fail | 7422389 | 2023-10-12 01:36:08 | 2023-10-12 02:05:01 | 2023-10-12 02:25:51 | 0:20:50 | 0:12:18 | 0:08:32 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 |
Failure Reason:
"2023-10-12T02:24:23.824821+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: Degraded data redundancy: 25/1730 objects degraded (1.445%), 7 pgs degraded (PG_DEGRADED)" in cluster log
dead | 7422390 | 2023-10-12 01:36:08 | 2023-10-12 02:05:01 | 2023-10-12 14:13:43 | 12:08:42 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 |
Failure Reason:
hit max job timeout
fail | 7422391 | 2023-10-12 01:36:09 | 2023-10-12 02:05:01 | 2023-10-12 02:32:29 | 0:27:28 | 0:17:49 | 0:09:39 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 45 --op write 22 --op delete 10 --op write_excl 22 --pool unique_pool_0'
fail | 7422392 | 2023-10-12 01:36:10 | 2023-10-12 02:05:12 | 2023-10-12 05:26:07 | 3:20:55 | 3:11:27 | 0:09:28 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} | 1 |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi033 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\''
fail | 7422393 | 2023-10-12 01:36:10 | 2023-10-12 02:05:22 | 2023-10-12 02:26:03 | 0:20:41 | 0:12:24 | 0:08:17 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} | 1 |
Failure Reason:
"2023-10-12T02:24:42.873183+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 19/1546 objects degraded (1.229%), 9 pgs degraded (PG_DEGRADED)" in cluster log
dead | 7422394 | 2023-10-12 01:36:11 | 2023-10-12 02:05:22 | 2023-10-12 14:15:08 | 12:09:46 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} | 2 |
Failure Reason:
hit max job timeout
pass | 7422395 | 2023-10-12 01:36:11 | 2023-10-12 02:06:03 | 2023-10-12 02:37:23 | 0:31:20 | 0:22:29 | 0:08:51 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} | 1 |
dead | 7422396 | 2023-10-12 01:36:12 | 2023-10-12 02:06:23 | 2023-10-12 14:17:19 | 12:10:56 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 |
Failure Reason:
hit max job timeout
fail | 7422397 | 2023-10-12 01:36:12 | 2023-10-12 02:08:14 | 2023-10-12 05:29:20 | 3:21:06 | 3:11:23 | 0:09:43 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} | 1 |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi022 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9587ffdfc8c52bcf06abbe89d2f527c434a3e9f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\''
pass | 7422398 | 2023-10-12 01:36:13 | 2023-10-12 02:08:24 | 2023-10-12 02:33:16 | 0:24:52 | 0:15:34 | 0:09:18 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} | 1 |
fail | 7422399 | 2023-10-12 01:36:14 | 2023-10-12 02:08:25 | 2023-10-12 02:41:51 | 0:33:26 | 0:22:12 | 0:11:14 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'
fail | 7422400 | 2023-10-12 01:36:14 | 2023-10-12 02:10:45 | 2023-10-12 02:32:08 | 0:21:23 | 0:11:24 | 0:09:59 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} | 1 |
Failure Reason:
"2023-10-12T02:30:09.008465+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 144/9816 objects degraded (1.467%), 8 pgs degraded (PG_DEGRADED)" in cluster log