User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sjust | 2023-10-17 06:50:25 | 2023-10-17 06:52:05 | 2023-10-17 19:42:58 | 12:50:53 | crimson-rados | wip-crimson-scrub-testing-2023-10-16 | smithi | a83d484 | 10 | 12 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7431065 | 2023-10-17 06:50:34 | 2023-10-17 06:52:05 | 2023-10-17 07:27:09 | 0:35:04 | 0:25:43 | 0:09:21 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 |
pass | 7431066 | 2023-10-17 06:50:35 | 2023-10-17 06:52:05 | 2023-10-17 07:17:00 | 0:24:55 | 0:16:07 | 0:08:48 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 |
fail | 7431067 | 2023-10-17 06:50:36 | 2023-10-17 06:52:06 | 2023-10-17 10:17:00 | 3:24:54 | 3:12:15 | 0:12:39 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi049 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a83d484198747052d1ed9eeed0a72e20b007e411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7431068 | 2023-10-17 06:50:36 | 2023-10-17 06:52:06 | 2023-10-17 07:21:10 | 0:29:04 | 0:15:36 | 0:13:28 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 |
pass | 7431069 | 2023-10-17 06:50:37 | 2023-10-17 06:55:47 | 2023-10-17 07:24:06 | 0:28:19 | 0:16:50 | 0:11:29 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 |
pass | 7431070 | 2023-10-17 06:50:38 | 2023-10-17 06:56:27 | 2023-10-17 07:21:30 | 0:25:03 | 0:16:39 | 0:08:24 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 |
dead | 7431071 | 2023-10-17 06:50:39 | 2023-10-17 06:56:27 | 2023-10-17 19:05:53 | 12:09:26 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 |
Failure Reason:
hit max job timeout
fail | 7431072 | 2023-10-17 06:50:39 | 2023-10-17 06:56:38 | 2023-10-17 07:23:25 | 0:26:47 | 0:15:48 | 0:10:59 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 |
Failure Reason:
"2023-10-17T07:17:50.565784+0000 mon.a (mon.0) 132 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs inactive (PG_AVAILABILITY)" in cluster log |
||||||||||||||
fail | 7431073 | 2023-10-17 06:50:40 | 2023-10-17 06:58:38 | 2023-10-17 10:19:46 | 3:21:08 | 3:12:18 | 0:08:50 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi071 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a83d484198747052d1ed9eeed0a72e20b007e411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7431074 | 2023-10-17 06:50:41 | 2023-10-17 07:33:30 | | | 1542 | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 |
fail | 7431075 | 2023-10-17 06:50:42 | 2023-10-17 06:58:39 | 2023-10-17 07:23:39 | 0:25:00 | 0:16:22 | 0:08:38 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 |
Failure Reason:
"2023-10-17T07:17:57.290219+0000 mon.a (mon.0) 126 : cluster [WRN] Health check failed: Reduced data availability: 20 pgs inactive (PG_AVAILABILITY)" in cluster log |
||||||||||||||
pass | 7431076 | 2023-10-17 06:50:42 | 2023-10-17 06:58:40 | 2023-10-17 07:42:35 | 0:43:55 | 0:26:32 | 0:17:23 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 |
pass | 7431077 | 2023-10-17 06:50:43 | 2023-10-17 07:05:11 | 2023-10-17 07:30:34 | 0:25:23 | 0:15:40 | 0:09:43 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 |
pass | 7431078 | 2023-10-17 06:50:44 | 2023-10-17 07:05:11 | 2023-10-17 07:37:33 | 0:32:22 | 0:16:33 | 0:15:49 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 |
fail | 7431079 | 2023-10-17 06:50:45 | 2023-10-17 07:12:42 | 2023-10-17 10:38:08 | 3:25:26 | 3:12:33 | 0:12:53 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 |
Failure Reason:
Command failed (workunit test cls/test_cls_lock.sh) on smithi084 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a83d484198747052d1ed9eeed0a72e20b007e411 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'
fail | 7431080 | 2023-10-17 06:50:46 | 2023-10-17 07:17:03 | 2023-10-17 07:50:44 | 0:33:41 | 0:19:31 | 0:14:10 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
fail | 7431081 | 2023-10-17 06:50:46 | 2023-10-17 07:21:34 | 2023-10-17 07:46:25 | 0:24:51 | 0:13:02 | 0:11:49 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 |
Failure Reason:
"2023-10-17T07:42:26.162809+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 106/7804 objects degraded (1.358%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
fail | 7431082 | 2023-10-17 06:50:47 | 2023-10-17 07:23:35 | 2023-10-17 07:55:45 | 0:32:10 | 0:19:47 | 0:12:23 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --localize-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
fail | 7431083 | 2023-10-17 06:50:48 | 2023-10-17 07:24:15 | 2023-10-17 07:45:06 | 0:20:51 | 0:12:03 | 0:08:48 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 |
Failure Reason:
"2023-10-17T07:43:04.099027+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: Degraded data redundancy: 109/9096 objects degraded (1.198%), 6 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
pass | 7431084 | 2023-10-17 06:50:49 | 2023-10-17 07:24:15 | 2023-10-17 07:48:15 | 0:24:00 | 0:11:02 | 0:12:58 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 |
fail | 7431085 | 2023-10-17 06:50:49 | 2023-10-17 07:27:16 | 2023-10-17 07:50:15 | 0:22:59 | 0:13:05 | 0:09:54 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 |
Failure Reason:
"2023-10-17T07:46:20.129720+0000 mon.a (mon.0) 150 : cluster [WRN] Health check failed: Degraded data redundancy: 8/1662 objects degraded (0.481%), 3 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
dead | 7431086 | 2023-10-17 06:50:50 | 2023-10-17 07:27:17 | 2023-10-17 19:42:58 | 12:15:41 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 |
Failure Reason:
hit max job timeout
fail | 7431087 | 2023-10-17 06:50:51 | 2023-10-17 07:33:28 | 2023-10-17 07:54:39 | 0:21:11 | 0:12:28 | 0:08:43 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 |
Failure Reason:
"2023-10-17T07:52:54.844054+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 25/1412 objects degraded (1.771%), 9 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
dead | 7431088 | 2023-10-17 06:50:52 | 2023-10-17 07:33:28 | 2023-10-17 19:41:57 | 12:08:29 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 |
Failure Reason:
hit max job timeout
fail | 7431089 | 2023-10-17 06:50:52 | 2023-10-17 07:33:38 | 2023-10-17 08:08:45 | 0:35:07 | 0:20:53 | 0:14:14 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 |
Failure Reason:
"2023-10-17T08:00:59.382322+0000 mon.a (mon.0) 153 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 32 sec, mon.a has slow ops (SLOW_OPS)" in cluster log |