User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
matan | 2023-10-31 22:58:07 | 2023-10-31 22:59:14 | 2023-11-01 12:54:46 | 13:55:32 | crimson-rados | wip-matanb-crimson-do_osd_ops_execute-part3 | smithi | d1dcabb | 21 | 33 | 14 |
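The Pass/Fail/Dead totals in the summary are just counts over the per-job statuses in the table that follows. A minimal sketch of recomputing them from pipe-delimited rows (the helper name and the truncated sample rows are illustrative, not part of the run output):

```python
from collections import Counter

def tally_statuses(rows):
    """Count jobs by status, i.e. the first pipe-delimited field of each row."""
    return Counter(row.split("|")[0].strip() for row in rows if row.strip())

# Hypothetical sample rows, truncated for illustration.
sample = [
    "fail | 7442519 | ...",
    "pass | 7442521 | ...",
    "pass | 7442522 | ...",
    "dead | 7442531 | ...",
]
counts = tally_statuses(sample)
```

Running this over the full job table should reproduce the Pass/Fail/Dead columns of the summary row.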
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7442519 | 2023-10-31 22:58:21 | 2023-10-31 22:59:14 | 2023-11-01 02:24:22 | 3:25:08 | 3:12:08 | 0:13:00 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi008 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 7442520 | 2023-10-31 22:58:21 | 2023-10-31 23:00:54 | 2023-11-01 02:24:05 | 3:23:11 | 3:11:47 | 0:11:24 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi125 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7442521 | 2023-10-31 22:58:22 | 2023-10-31 23:02:45 | 2023-10-31 23:29:15 | 0:26:30 | 0:16:01 | 0:10:29 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 | |
pass | 7442522 | 2023-10-31 22:58:22 | 2023-10-31 23:04:06 | 2023-10-31 23:29:01 | 0:24:55 | 0:15:29 | 0:09:26 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 | |
fail | 7442523 | 2023-10-31 22:58:22 | 2023-10-31 23:04:06 | 2023-11-01 02:29:53 | 3:25:47 | 3:11:56 | 0:13:51 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi151 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
fail | 7442524 | 2023-10-31 22:58:23 | 2023-10-31 23:08:17 | 2023-11-01 02:29:21 | 3:21:04 | 3:12:00 | 0:09:04 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi003 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
pass | 7442525 | 2023-10-31 22:58:23 | 2023-10-31 23:08:17 | 2023-10-31 23:29:02 | 0:20:45 | 0:11:29 | 0:09:16 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 | |
pass | 7442526 | 2023-10-31 22:58:23 | 2023-10-31 23:08:17 | 2023-10-31 23:31:34 | 0:23:17 | 0:12:20 | 0:10:57 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 | |
pass | 7442527 | 2023-10-31 22:58:24 | 2023-10-31 23:08:18 | 2023-10-31 23:34:33 | 0:26:15 | 0:15:48 | 0:10:27 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7442528 | 2023-10-31 22:58:24 | 2023-10-31 23:09:38 | 2023-10-31 23:36:54 | 0:27:16 | 0:15:52 | 0:11:24 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7442529 | 2023-10-31 22:58:25 | 2023-10-31 23:11:49 | 2023-10-31 23:36:30 | 0:24:41 | 0:16:09 | 0:08:32 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 | |
pass | 7442530 | 2023-10-31 22:58:25 | 2023-10-31 23:11:49 | 2023-10-31 23:37:01 | 0:25:12 | 0:15:59 | 0:09:13 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 | |
dead | 7442531 | 2023-10-31 22:58:26 | 2023-10-31 23:12:20 | 2023-11-01 11:20:42 | 12:08:22 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442532 | 2023-10-31 22:58:26 | 2023-10-31 23:12:20 | 2023-10-31 23:46:48 | 0:34:28 | 0:22:03 | 0:12:25 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op copy_from 0 --op write_excl 50 --pool unique_pool_0' |
fail | 7442533 | 2023-10-31 22:58:27 | 2023-10-31 23:13:41 | 2023-10-31 23:39:26 | 0:25:45 | 0:14:52 | 0:10:53 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 | |
Failure Reason:
"2023-10-31T23:33:10.809183+0000 mon.a (mon.0) 132 : cluster [WRN] Health check failed: Reduced data availability: 11 pgs inactive (PG_AVAILABILITY)" in cluster log |
pass | 7442534 | 2023-10-31 22:58:27 | 2023-10-31 23:14:31 | 2023-10-31 23:37:30 | 0:22:59 | 0:14:49 | 0:08:10 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 | |
fail | 7442535 | 2023-10-31 22:58:28 | 2023-10-31 23:14:31 | 2023-11-01 02:36:56 | 3:22:25 | 3:11:56 | 0:10:29 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi172 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
fail | 7442536 | 2023-10-31 22:58:28 | 2023-10-31 23:15:42 | 2023-11-01 02:37:26 | 3:21:44 | 3:11:42 | 0:10:02 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi181 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
pass | 7442537 | 2023-10-31 22:58:29 | 2023-10-31 23:16:12 | 2023-10-31 23:53:17 | 0:37:05 | 0:25:08 | 0:11:57 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7442538 | 2023-10-31 22:58:29 | 2023-10-31 23:18:03 | 2023-11-01 00:00:34 | 0:42:31 | 0:25:48 | 0:16:43 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7442539 | 2023-10-31 22:58:29 | 2023-10-31 23:19:23 | 2023-10-31 23:44:56 | 0:25:33 | 0:15:27 | 0:10:06 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 | |
pass | 7442540 | 2023-10-31 22:58:30 | 2023-10-31 23:19:24 | 2023-10-31 23:45:28 | 0:26:04 | 0:15:39 | 0:10:25 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 | |
pass | 7442541 | 2023-10-31 22:58:30 | 2023-10-31 23:20:14 | 2023-10-31 23:56:24 | 0:36:10 | 0:26:12 | 0:09:58 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 7442542 | 2023-10-31 22:58:30 | 2023-10-31 23:21:05 | 2023-10-31 23:59:57 | 0:38:52 | 0:26:44 | 0:12:08 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 7442543 | 2023-10-31 22:58:31 | 2023-10-31 23:22:15 | 2023-10-31 23:47:21 | 0:25:06 | 0:15:43 | 0:09:23 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 | |
pass | 7442544 | 2023-10-31 22:58:31 | 2023-10-31 23:22:16 | 2023-10-31 23:49:16 | 0:27:00 | 0:15:45 | 0:11:15 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 | |
fail | 7442545 | 2023-10-31 22:58:32 | 2023-10-31 23:22:16 | 2023-11-01 00:44:33 | 1:22:17 | 1:11:17 | 0:11:00 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi112 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh -m \'not (wait or tier or ec or bench or stats)\'' |
fail | 7442546 | 2023-10-31 22:58:32 | 2023-10-31 23:23:26 | 2023-11-01 00:50:18 | 1:26:52 | 1:12:28 | 0:14:24 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi045 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh -m \'not (wait or tier or ec or bench or stats)\'' |
fail | 7442547 | 2023-10-31 22:58:33 | 2023-10-31 23:27:07 | 2023-11-01 02:48:17 | 3:21:10 | 3:11:39 | 0:09:31 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi084 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
fail | 7442548 | 2023-10-31 22:58:33 | 2023-10-31 23:27:08 | 2023-11-01 02:50:12 | 3:23:04 | 3:11:54 | 0:11:10 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi059 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh' |
dead | 7442549 | 2023-10-31 22:58:34 | 2023-10-31 23:29:08 | 2023-11-01 11:37:47 | 12:08:39 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442550 | 2023-10-31 22:58:34 | 2023-10-31 23:29:19 | 2023-11-01 11:41:29 | 12:12:10 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442551 | 2023-10-31 22:58:35 | 2023-10-31 23:32:59 | 2023-10-31 23:55:55 | 0:22:56 | 0:12:51 | 0:10:05 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 | |
Failure Reason:
"2023-10-31T23:52:04.809631+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 21/10222 objects degraded (0.205%), 1 pg degraded (PG_DEGRADED)" in cluster log |
fail | 7442552 | 2023-10-31 22:58:35 | 2023-10-31 23:33:00 | 2023-10-31 23:56:16 | 0:23:16 | 0:13:52 | 0:09:24 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 | |
Failure Reason:
"2023-10-31T23:52:14.891892+0000 mon.a (mon.0) 117 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
dead | 7442553 | 2023-10-31 22:58:35 | 2023-10-31 23:33:00 | 2023-11-01 11:42:50 | 12:09:50 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442554 | 2023-10-31 22:58:36 | 2023-10-31 23:34:41 | 2023-11-01 11:45:23 | 12:10:42 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442555 | 2023-10-31 22:58:36 | 2023-10-31 23:37:01 | 2023-10-31 23:58:06 | 0:21:05 | 0:12:00 | 0:09:05 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 | |
Failure Reason:
"2023-10-31T23:56:14.607968+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 157/9678 objects degraded (1.622%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442556 | 2023-10-31 22:58:36 | 2023-10-31 23:37:02 | 2023-10-31 23:57:43 | 0:20:41 | 0:12:22 | 0:08:19 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 | |
Failure Reason:
"2023-10-31T23:56:10.960461+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 139/10276 objects degraded (1.353%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442557 | 2023-10-31 22:58:37 | 2023-10-31 23:37:02 | 2023-11-01 02:58:45 | 3:21:43 | 3:11:46 | 0:09:57 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_lock_fence.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh' |
fail | 7442558 | 2023-10-31 22:58:37 | 2023-10-31 23:37:33 | 2023-11-01 03:00:24 | 3:22:51 | 3:11:38 | 0:11:13 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_lock_fence.sh) on smithi123 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_lock_fence.sh' |
fail | 7442559 | 2023-10-31 22:58:38 | 2023-10-31 23:39:33 | 2023-11-01 00:03:25 | 0:23:52 | 0:12:52 | 0:11:00 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 | |
Failure Reason:
"2023-11-01T00:00:15.258388+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 47/3466 objects degraded (1.356%), 3 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442560 | 2023-10-31 22:58:38 | 2023-10-31 23:40:24 | 2023-11-01 00:04:11 | 0:23:47 | 0:12:54 | 0:10:53 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 | |
Failure Reason:
"2023-11-01T00:00:47.703355+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: Degraded data redundancy: 22/1566 objects degraded (1.405%), 9 pgs degraded (PG_DEGRADED)" in cluster log |
dead | 7442561 | 2023-10-31 22:58:39 | 2023-10-31 23:40:24 | 2023-11-01 11:52:51 | 12:12:27 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442562 | 2023-10-31 22:58:39 | 2023-10-31 23:44:25 | 2023-11-01 11:53:53 | 12:09:28 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442563 | 2023-10-31 22:58:40 | 2023-10-31 23:45:35 | 2023-11-01 00:07:35 | 0:22:00 | 0:12:10 | 0:09:50 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 | |
Failure Reason:
"2023-11-01T00:05:43.669811+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 30/1698 objects degraded (1.767%), 10 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442564 | 2023-10-31 22:58:40 | 2023-10-31 23:46:56 | 2023-11-01 00:09:51 | 0:22:55 | 0:12:35 | 0:10:20 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 | |
Failure Reason:
"2023-11-01T00:06:54.611608+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 23/1670 objects degraded (1.377%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
dead | 7442565 | 2023-10-31 22:58:41 | 2023-10-31 23:46:56 | 2023-11-01 11:57:39 | 12:10:43 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442566 | 2023-10-31 22:58:41 | 2023-10-31 23:49:17 | 2023-11-01 12:03:44 | 12:14:27 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442567 | 2023-10-31 22:58:42 | 2023-10-31 23:53:17 | 2023-11-01 12:06:21 | 12:13:04 | | | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442568 | 2023-10-31 22:58:42 | 2023-10-31 23:56:18 | 2023-11-01 12:04:50 | 12:08:32 | | | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442569 | 2023-10-31 22:58:42 | 2023-10-31 23:56:29 | 2023-11-01 03:18:35 | 3:22:06 | 3:11:44 | 0:10:22 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\'' |
fail | 7442570 | 2023-10-31 22:58:43 | 2023-10-31 23:57:49 | 2023-11-01 03:19:05 | 3:21:16 | 3:11:14 | 0:10:02 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi124 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\'' |
fail | 7442571 | 2023-10-31 22:58:43 | 2023-10-31 23:58:09 | 2023-11-01 00:21:03 | 0:22:54 | 0:11:25 | 0:11:29 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} | 1 | |
Failure Reason:
"2023-11-01T00:18:56.755926+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 30/1848 objects degraded (1.623%), 9 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442572 | 2023-10-31 22:58:43 | 2023-11-01 00:00:00 | 2023-11-01 00:21:42 | 0:21:42 | 0:12:14 | 0:09:28 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} | 1 | |
Failure Reason:
"2023-11-01T00:20:32.740993+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 11/1808 objects degraded (0.608%), 2 pgs degraded (PG_DEGRADED)" in cluster log |
dead | 7442573 | 2023-10-31 22:58:44 | 2023-11-01 00:00:00 | 2023-11-01 12:08:39 | 12:08:39 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
Failure Reason:
hit max job timeout |
dead | 7442574 | 2023-10-31 22:58:44 | 2023-11-01 00:00:41 | 2023-11-01 12:10:52 | 12:10:11 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
Failure Reason:
hit max job timeout |
pass | 7442575 | 2023-10-31 22:58:45 | 2023-11-01 00:03:36 | 2023-11-01 00:37:51 | 0:34:15 | 0:22:52 | 0:11:23 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} | 1 | |
pass | 7442576 | 2023-10-31 22:58:45 | 2023-11-01 00:04:16 | 2023-11-01 00:38:28 | 0:34:12 | 0:22:26 | 0:11:46 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} | 1 | |
fail | 7442577 | 2023-10-31 22:58:46 | 2023-11-01 00:07:37 | 2023-11-01 00:52:19 | 0:44:42 | 0:21:30 | 0:23:12 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op copy_from 0 --op write_excl 50 --pool unique_pool_0' |
fail | 7442578 | 2023-10-31 22:58:46 | 2023-11-01 00:21:09 | 2023-11-01 00:55:11 | 0:34:02 | 0:22:20 | 0:11:42 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op copy_from 0 --op write_excl 50 --pool unique_pool_0' |
fail | 7442579 | 2023-10-31 22:58:47 | 2023-11-01 00:21:49 | 2023-11-01 03:51:04 | 3:29:15 | 3:10:43 | 0:18:32 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi079 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\'' |
fail | 7442580 | 2023-10-31 22:58:47 | 2023-11-01 00:30:11 | 2023-11-01 03:59:58 | 3:29:47 | 3:10:48 | 0:18:59 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} | 1 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on smithi163 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9edd23b5a8ddacab225824948b06b370baf276b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh -m \'not skip_if_crimson\'' |
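The repeated `status 124` failures above (the `test.sh` and `test_librbd_python.sh` workunits) are not a crash code from the tests themselves: 124 is the exit status GNU coreutils `timeout` returns when the wrapped command is killed for exceeding its limit, here the `timeout 3h` wrapper in each command line. A minimal sketch of that behavior:

```shell
# coreutils `timeout` kills the wrapped command when the limit elapses
# and reports exit status 124 -- the same status the workunit failures
# above show for tests that ran past their 3h budget.
timeout 1 sleep 5
echo "exit code: $?"   # prints: exit code: 124
```

So a `status 124` workunit failure usually means "hung or too slow", not "assertion failed"; the test log, not the exit status, tells you where it stalled.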
pass | 7442581 | 2023-10-31 22:58:48 | 2023-11-01 00:37:52 | 2023-11-01 01:03:31 | 0:25:39 | 0:15:26 | 0:10:13 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} | 1 | |
pass | 7442582 | 2023-10-31 22:58:48 | 2023-11-01 00:38:33 | 2023-11-01 01:09:24 | 0:30:51 | 0:15:27 | 0:15:24 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} | 1 | |
fail | 7442583 | 2023-10-31 22:58:49 | 2023-11-01 00:44:34 | 2023-11-01 01:19:34 | 0:35:00 | 0:23:01 | 0:11:59 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0' |
dead | 7442584 | 2023-10-31 22:58:49 | 2023-11-01 00:46:34 | 2023-11-01 12:54:46 | 12:08:12 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
Failure Reason:
hit max job timeout |
fail | 7442585 | 2023-10-31 22:58:49 | 2023-11-01 00:46:35 | 2023-11-01 01:11:06 | 0:24:31 | 0:11:54 | 0:12:37 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} | 1 | |
Failure Reason:
"2023-11-01T01:09:40.533323+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 129/9964 objects degraded (1.295%), 6 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7442586 | 2023-10-31 22:58:49 | 2023-11-01 00:50:25 | 2023-11-01 01:11:25 | 0:21:00 | 0:11:58 | 0:09:02 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} | 1 | |
Failure Reason:
"2023-11-01T01:10:00.111319+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 105/9620 objects degraded (1.091%), 7 pgs degraded (PG_DEGRADED)" in cluster log |
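The `PG_DEGRADED` failures above (jobs 7442571, 7442572, 7442585, 7442586) report `"... in cluster log"` because teuthology scans the archived cluster log for `[WRN]`/`[ERR]` entries after the run and fails the job if any are found that are not covered by the log ignorelist; the workload itself completed. A hedged sketch of that kind of scan, using a hypothetical log path and a sample line copied from the failures above:

```shell
# Sketch of the post-run cluster-log scan that produced the
# '"... PG_DEGRADED ..." in cluster log' failure reasons. The log path
# is hypothetical; teuthology reads the job's archived ceph.log.
log=/tmp/sample-ceph-cluster.log
cat > "$log" <<'EOF'
2023-11-01T00:18:56.755926+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 30/1848 objects degraded (1.623%), 9 pgs degraded (PG_DEGRADED)
2023-11-01T00:19:10.000000+0000 mon.a (mon.0) 150 : cluster [INF] Health check cleared: PG_DEGRADED
EOF
# Any [WRN]/[ERR] match not on the ignorelist fails the job.
grep -E '\[(WRN|ERR)\]' "$log"
```

In suites where transient degradation is expected, such warnings are typically added to the task's log ignorelist rather than treated as real failures.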