User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sjust | 2023-10-16 23:13:53 | 2023-10-16 23:14:47 | 2023-10-17 11:38:10 | 12:23:23 | crimson-rados | wip-crimson-scrub-testing-2023-10-15 | smithi | 06f14df | 11 | 12 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7430668 | 2023-10-16 23:14:03 | 2023-10-16 23:14:47 | 2023-10-16 23:51:52 | 0:37:05 | 0:28:37 | 0:08:28 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 | |
pass | 7430669 | 2023-10-16 23:14:04 | 2023-10-16 23:18:39 | 2023-10-16 23:45:21 | 0:26:42 | 0:16:24 | 0:10:18 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 | |
fail | 7430670 | 2023-10-16 23:14:05 | 2023-10-16 23:18:40 | 2023-10-17 02:41:47 | 3:23:07 | 3:12:24 | 0:10:43 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi083 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06f14dfbeeb0dce341775222d82ea9644e37c496 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7430671 | 2023-10-16 23:14:05 | 2023-10-16 23:20:30 | 2023-10-16 23:45:20 | 0:24:50 | 0:15:52 | 0:08:58 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 | |
pass | 7430672 | 2023-10-16 23:14:06 | 2023-10-16 23:20:30 | 2023-10-16 23:46:55 | 0:26:25 | 0:16:09 | 0:10:16 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
pass | 7430673 | 2023-10-16 23:14:07 | 2023-10-16 23:21:51 | 2023-10-16 23:46:50 | 0:24:59 | 0:16:12 | 0:08:47 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 | |
dead | 7430674 | 2023-10-16 23:14:08 | 2023-10-16 23:21:51 | 2023-10-17 11:32:13 | 12:10:22 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
Failure Reason:
hit max job timeout
pass | 7430675 | 2023-10-16 23:14:08 | 2023-10-16 23:22:12 | 2023-10-16 23:49:46 | 0:27:34 | 0:15:19 | 0:12:15 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 | |
fail | 7430676 | 2023-10-16 23:14:09 | 2023-10-16 23:24:22 | 2023-10-17 02:47:42 | 3:23:20 | 3:12:55 | 0:10:25 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 | |
Failure Reason:
Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi046 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06f14dfbeeb0dce341775222d82ea9644e37c496 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh'
pass | 7430677 | 2023-10-16 23:14:10 | 2023-10-16 23:24:23 | 2023-10-17 00:02:48 | 0:38:25 | 0:25:51 | 0:12:34 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7430678 | 2023-10-16 23:14:11 | 2023-10-16 23:27:43 | 2023-10-16 23:54:19 | 0:26:36 | 0:16:31 | 0:10:05 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 | |
pass | 7430679 | 2023-10-16 23:14:12 | 2023-10-16 23:27:44 | 2023-10-17 00:04:44 | 0:37:00 | 0:26:53 | 0:10:07 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 7430680 | 2023-10-16 23:14:12 | 2023-10-16 23:27:44 | 2023-10-16 23:54:55 | 0:27:11 | 0:16:38 | 0:10:33 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 | |
pass | 7430681 | 2023-10-16 23:14:13 | 2023-10-16 23:28:04 | 2023-10-16 23:57:00 | 0:28:56 | 0:17:18 | 0:11:38 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 | |
fail | 7430682 | 2023-10-16 23:14:14 | 2023-10-16 23:29:45 | 2023-10-17 02:52:27 | 3:22:42 | 3:13:26 | 0:09:16 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_lock.sh) on smithi053 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=06f14dfbeeb0dce341775222d82ea9644e37c496 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'
dead | 7430683 | 2023-10-16 23:14:15 | 2023-10-16 23:29:45 | 2023-10-17 11:38:10 | 12:08:25 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
Failure Reason:
hit max job timeout
fail | 7430684 | 2023-10-16 23:14:15 | 2023-10-16 23:29:46 | 2023-10-16 23:53:23 | 0:23:37 | 0:13:44 | 0:09:53 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 | |
Failure Reason:
"2023-10-16T23:50:35.724004+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 64/9668 objects degraded (0.662%), 3 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
fail | 7430685 | 2023-10-16 23:14:16 | 2023-10-16 23:30:46 | 2023-10-17 00:04:49 | 0:34:03 | 0:23:06 | 0:10:57 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --localize-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
fail | 7430686 | 2023-10-16 23:14:17 | 2023-10-16 23:32:27 | 2023-10-16 23:55:48 | 0:23:21 | 0:13:34 | 0:09:47 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 | |
Failure Reason:
"2023-10-16T23:53:49.738934+0000 mon.a (mon.0) 149 : cluster [WRN] Health check failed: Degraded data redundancy: 93/10100 objects degraded (0.921%), 5 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
fail | 7430687 | 2023-10-16 23:14:18 | 2023-10-16 23:32:27 | 2023-10-16 23:56:39 | 0:24:12 | 0:15:43 | 0:08:29 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 | |
Failure Reason:
"2023-10-16T23:51:56.072080+0000 mon.a (mon.0) 134 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
||||||||||||||
fail | 7430688 | 2023-10-16 23:14:19 | 2023-10-16 23:32:27 | 2023-10-16 23:55:20 | 0:22:53 | 0:13:41 | 0:09:12 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 | |
Failure Reason:
"2023-10-16T23:52:19.725996+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 21/1540 objects degraded (1.364%), 10 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
fail | 7430689 | 2023-10-16 23:14:20 | 2023-10-16 23:32:28 | 2023-10-17 00:06:34 | 0:34:06 | 0:21:56 | 0:12:10 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 | |
Failure Reason:
Command failed on smithi158 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 7430690 | 2023-10-16 23:14:20 | 2023-10-16 23:32:58 | 2023-10-16 23:56:14 | 0:23:16 | 0:12:42 | 0:10:34 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 | |
Failure Reason:
"2023-10-16T23:53:55.052335+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 17/1600 objects degraded (1.062%), 9 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
fail | 7430691 | 2023-10-16 23:14:21 | 2023-10-16 23:33:08 | 2023-10-17 00:08:30 | 0:35:22 | 0:22:30 | 0:12:52 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
fail | 7430692 | 2023-10-16 23:14:22 | 2023-10-16 23:35:39 | 2023-10-17 00:11:02 | 0:35:23 | 0:21:53 | 0:13:30 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 | |
Failure Reason:
Command failed on smithi132 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |