User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yingxin | 2023-10-11 02:13:06 | 2023-10-11 02:56:21 | 2023-10-11 06:27:10 | 3:30:49 | crimson-rados | wip-yingxin-crimson-improve-mempool | smithi | 21780a4 | 23 | 11 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7420959 | 2023-10-11 02:13:24 | 2023-10-11 02:56:21 | 2023-10-11 03:31:42 | 0:35:21 | 0:26:37 | 0:08:44 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_api_tests} | 2 | |
pass | 7420960 | 2023-10-11 02:13:25 | 2023-10-11 02:56:42 | 2023-10-11 03:21:47 | 0:25:05 | 0:16:45 | 0:08:20 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_read} | 1 | |
fail | 7420961 | 2023-10-11 02:13:26 | 2023-10-11 02:56:42 | 2023-10-11 06:18:46 | 3:22:04 | 3:12:24 | 0:09:40 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests} | 1 | Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi031 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21780a4fb1b788d3aac31a2a2bfba73265c986e7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
pass | 7420962 | 2023-10-11 02:13:26 | 2023-10-11 02:57:52 | 2023-10-11 03:22:42 | 0:24:50 | 0:15:53 | 0:08:57 | smithi | main | centos | 9.stream | crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} | 1 | |
fail | 7420963 | 2023-10-11 02:13:27 | 2023-10-11 02:57:53 | 2023-10-11 03:29:15 | 0:31:22 | 0:21:48 | 0:09:34 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | Command failed on smithi006 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph status --format=json' |
pass | 7420964 | 2023-10-11 02:13:27 | 2023-10-11 02:58:33 | 2023-10-11 03:25:12 | 0:26:39 | 0:15:29 | 0:11:10 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4K_rand_rw} | 1 | |
pass | 7420965 | 2023-10-11 02:13:28 | 2023-10-11 02:59:54 | 2023-10-11 03:37:16 | 0:37:22 | 0:27:55 | 0:09:27 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 7420966 | 2023-10-11 02:13:29 | 2023-10-11 02:59:54 | 2023-10-11 03:25:29 | 0:25:35 | 0:14:36 | 0:10:59 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_read} | 1 | |
fail | 7420967 | 2023-10-11 02:13:29 | 2023-10-11 03:02:25 | 2023-10-11 06:27:10 | 3:24:45 | 3:12:09 | 0:12:36 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_api_tests_old_format} | 1 | Command failed (workunit test rbd/crimson/test_crimson_librbd.sh) on smithi123 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=21780a4fb1b788d3aac31a2a2bfba73265c986e7 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/crimson/test_crimson_librbd.sh' |
pass | 7420968 | 2023-10-11 02:13:30 | 2023-10-11 03:04:05 | 2023-10-11 03:39:06 | 0:35:01 | 0:25:07 | 0:09:54 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
pass | 7420969 | 2023-10-11 02:13:31 | 2023-10-11 03:04:05 | 2023-10-11 03:29:20 | 0:25:15 | 0:16:01 | 0:09:14 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_rw} | 1 | |
pass | 7420970 | 2023-10-11 02:13:31 | 2023-10-11 03:04:26 | 2023-10-11 03:40:41 | 0:36:15 | 0:25:57 | 0:10:18 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench} | 2 | |
pass | 7420971 | 2023-10-11 02:13:32 | 2023-10-11 03:05:16 | 2023-10-11 03:30:22 | 0:25:06 | 0:15:35 | 0:09:31 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/fio_4M_rand_write} | 1 | |
pass | 7420972 | 2023-10-11 02:13:33 | 2023-10-11 03:05:17 | 2023-10-11 03:31:10 | 0:25:53 | 0:16:49 | 0:09:04 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rados_python} | 2 | |
pass | 7420973 | 2023-10-11 02:13:33 | 2023-10-11 03:06:37 | 2023-10-11 03:30:17 | 0:23:40 | 0:12:14 | 0:11:26 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_cls_tests} | 1 | |
pass | 7420974 | 2023-10-11 02:13:34 | 2023-10-11 03:09:28 | 2023-10-11 03:38:52 | 0:29:24 | 0:20:19 | 0:09:05 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
fail | 7420975 | 2023-10-11 02:13:35 | 2023-10-11 03:09:48 | 2023-10-11 03:33:24 | 0:23:36 | 0:12:46 | 0:10:50 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_rand_read} | 1 | "2023-10-11T03:31:07.515752+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 121/11276 objects degraded (1.073%), 6 pgs degraded (PG_DEGRADED)" in cluster log |
pass | 7420976 | 2023-10-11 02:13:35 | 2023-10-11 03:12:19 | 2023-10-11 03:43:15 | 0:30:56 | 0:20:22 | 0:10:34 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 7420977 | 2023-10-11 02:13:36 | 2023-10-11 03:14:09 | 2023-10-11 03:35:08 | 0:20:59 | 0:12:02 | 0:08:57 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4K_seq_read} | 1 | "2023-10-11T03:33:07.144005+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 134/11572 objects degraded (1.158%), 7 pgs degraded (PG_DEGRADED)" in cluster log |
pass | 7420978 | 2023-10-11 02:13:37 | 2023-10-11 03:14:10 | 2023-10-11 03:36:05 | 0:21:55 | 0:11:12 | 0:10:43 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_lock_and_fence} | 1 | |
fail | 7420979 | 2023-10-11 02:13:37 | 2023-10-11 03:15:00 | 2023-10-11 03:37:59 | 0:22:59 | 0:13:02 | 0:09:57 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_rand_read} | 1 | "2023-10-11T03:33:56.545993+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 32/1724 objects degraded (1.856%), 9 pgs degraded (PG_DEGRADED)" in cluster log |
pass | 7420980 | 2023-10-11 02:13:38 | 2023-10-11 03:15:01 | 2023-10-11 03:44:48 | 0:29:47 | 0:20:12 | 0:09:35 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects} | 2 | |
fail | 7420981 | 2023-10-11 02:13:39 | 2023-10-11 03:15:51 | 2023-10-11 03:36:50 | 0:20:59 | 0:12:06 | 0:08:53 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_seq_read} | 1 | "2023-10-11T03:34:43.161846+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 21/1826 objects degraded (1.150%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
pass | 7420982 | 2023-10-11 02:13:39 | 2023-10-11 03:15:51 | 2023-10-11 03:56:48 | 0:40:57 | 0:28:29 | 0:12:28 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
pass | 7420983 | 2023-10-11 02:13:40 | 2023-10-11 03:19:32 | 2023-10-11 03:42:48 | 0:23:16 | 0:14:02 | 0:09:14 | smithi | main | centos | 9.stream | crimson-rados/basic/{clusters/fixed-2 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/readwrite} | 2 | |
pass | 7420984 | 2023-10-11 02:13:40 | 2023-10-11 03:19:43 | 2023-10-11 03:50:42 | 0:30:59 | 0:22:17 | 0:08:42 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests} | 1 | |
fail | 7420985 | 2023-10-11 02:13:41 | 2023-10-11 03:19:43 | 2023-10-11 03:42:52 | 0:23:09 | 0:11:41 | 0:11:28 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_4M_write} | 1 | "2023-10-11T03:40:49.497969+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: Degraded data redundancy: 29/1530 objects degraded (1.895%), 6 pgs degraded (PG_DEGRADED)" in cluster log |
fail | 7420986 | 2023-10-11 02:13:42 | 2023-10-11 03:21:54 | 2023-10-11 03:59:09 | 0:37:15 | 0:27:36 | 0:09:39 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects-localized} | 2 | Command failed on smithi170 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats' |
fail | 7420987 | 2023-10-11 02:13:42 | 2023-10-11 03:22:04 | 2023-10-11 06:24:05 | 3:02:01 | 2:51:43 | 0:10:18 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/radosbench_omap_write} | 1 | reached maximum tries (1551) after waiting for 9300 seconds |
pass | 7420988 | 2023-10-11 02:13:43 | 2023-10-11 03:22:34 | 2023-10-11 04:01:45 | 0:39:11 | 0:29:11 | 0:10:00 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 | |
pass | 7420989 | 2023-10-11 02:13:44 | 2023-10-11 03:22:35 | 2023-10-11 03:51:25 | 0:28:50 | 0:20:11 | 0:08:39 | smithi | main | centos | 9.stream | crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} | 1 | |
pass | 7420990 | 2023-10-11 02:13:44 | 2023-10-11 03:22:45 | 2023-10-11 03:47:46 | 0:25:01 | 0:16:13 | 0:08:48 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_fio} | 1 | |
pass | 7420991 | 2023-10-11 02:13:45 | 2023-10-11 03:22:55 | 2023-10-11 03:50:50 | 0:27:55 | 0:14:57 | 0:12:58 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} | 2 | |
fail | 7420992 | 2023-10-11 02:13:45 | 2023-10-11 03:25:36 | 2023-10-11 03:49:20 | 0:23:44 | 0:12:05 | 0:11:39 | smithi | main | centos | 9.stream | crimson-rados/perf/{clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore settings/optimized workloads/sample_radosbench} | 1 | "2023-10-11T03:47:53.246022+0000 mon.a (mon.0) 155 : cluster [WRN] Health check failed: Degraded data redundancy: 131/9796 objects degraded (1.337%), 7 pgs degraded (PG_DEGRADED)" in cluster log |