User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
pdonnell | 2023-10-26 05:21:22 | 2023-10-26 05:25:06 | 2023-10-26 15:33:36 | 10:08:30 | fs | wip-batrick-testing-20231024.144545 | smithi | 3849936 | 5 | 25 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7438442 | 2023-10-26 05:21:39 | 2023-10-26 05:25:06 | 2023-10-26 07:33:44 | 2:08:38 | 1:58:50 | 0:09:48 | smithi | main | centos | 9.stream | fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health} supported-random-distros$/{centos_latest} tasks/mirror} | 1 | |
Failure Reason: "2023-10-26T06:00:30.954633+0000 mon.a (mon.0) 2166 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7438443 | 2023-10-26 05:21:40 | 2023-10-26 05:26:37 | 2023-10-26 06:10:06 | 0:43:29 | 0:33:19 | 0:10:10 | smithi | main | centos | 9.stream | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} | 2 | |
Failure Reason: "2023-10-26T05:54:25.824349+0000 mon.a (mon.0) 1345 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7438444 | 2023-10-26 05:21:41 | 2023-10-26 05:27:07 | 2023-10-26 07:42:23 | 2:15:16 | 2:05:30 | 0:09:46 | smithi | main | centos | 9.stream | fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/ignorelist_health tasks/mirror}} | 1 | |
Failure Reason: "2023-10-26T06:05:47.282692+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7438445 | 2023-10-26 05:21:41 | 2023-10-26 05:27:07 | 2023-10-26 06:01:33 | 0:34:26 | 0:23:21 | 0:11:05 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} | 2 | |
Failure Reason: "2023-10-26T05:47:08.297699+0000 osd.5 (osd.5) 3 : cluster [WRN] OSD bench result of 164877.815646 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]." in cluster log
fail | 7438446 | 2023-10-26 05:21:42 | 2023-10-26 05:27:48 | 2023-10-26 06:26:14 | 0:58:26 | 0:48:16 | 0:10:10 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/postgres}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438447 | 2023-10-26 05:21:43 | 2023-10-26 05:30:59 | 2023-10-26 06:14:44 | 0:43:45 | 0:32:10 | 0:11:35 | smithi | main | centos | 9.stream | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_latest} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} | 2 | |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi155 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=384993681abceb8cb8affea10538d607052db4b4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'
fail | 7438448 | 2023-10-26 05:21:44 | 2023-10-26 05:31:19 | 2023-10-26 06:43:47 | 1:12:28 | 1:03:55 | 0:08:33 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438449 | 2023-10-26 05:21:44 | 2023-10-26 05:32:50 | 2023-10-26 06:16:30 | 0:43:40 | 0:31:51 | 0:11:49 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} | 2 | |
Failure Reason: "2023-10-26T06:10:20.661255+0000 mon.a (mon.0) 2517 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7438450 | 2023-10-26 05:21:45 | 2023-10-26 05:35:01 | 2023-10-26 07:02:11 | 1:27:10 | 1:16:23 | 0:10:47 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
fail | 7438451 | 2023-10-26 05:21:46 | 2023-10-26 05:35:01 | 2023-10-26 06:31:39 | 0:56:38 | 0:43:22 | 0:13:16 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=384993681abceb8cb8affea10538d607052db4b4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
pass | 7438452 | 2023-10-26 05:21:47 | 2023-10-26 05:38:22 | 2023-10-26 06:13:55 | 0:35:33 | 0:28:11 | 0:07:22 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} | 3 | |
fail | 7438453 | 2023-10-26 05:21:48 | 2023-10-26 05:38:32 | 2023-10-26 07:10:07 | 1:31:35 | 1:21:16 | 0:10:19 | smithi | main | centos | 9.stream | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
fail | 7438454 | 2023-10-26 05:21:48 | 2023-10-26 05:38:43 | 2023-10-26 06:19:50 | 0:41:07 | 0:32:57 | 0:08:10 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438455 | 2023-10-26 05:21:49 | 2023-10-26 05:40:44 | 2023-10-26 06:35:05 | 0:54:21 | 0:46:19 | 0:08:02 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438456 | 2023-10-26 05:21:50 | 2023-10-26 05:41:34 | 2023-10-26 06:32:16 | 0:50:42 | 0:42:46 | 0:07:56 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438457 | 2023-10-26 05:21:51 | 2023-10-26 05:42:55 | 2023-10-26 06:35:04 | 0:52:09 | 0:45:05 | 0:07:04 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: error during scrub thrashing: Command failed on smithi019 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'
fail | 7438458 | 2023-10-26 05:21:51 | 2023-10-26 05:43:55 | 2023-10-26 06:51:35 | 1:07:40 | 0:54:51 | 0:12:49 | smithi | main | ubuntu | 22.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 | |
Failure Reason: saw valgrind issues
fail | 7438459 | 2023-10-26 05:21:52 | 2023-10-26 05:45:16 | 2023-10-26 06:20:16 | 0:35:00 | 0:25:43 | 0:09:17 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snapshots} | 2 | |
Failure Reason: Test failure: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots), test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
fail | 7438460 | 2023-10-26 05:21:53 | 2023-10-26 05:45:16 | 2023-10-26 15:33:36 | 9:48:20 | 9:34:01 | 0:14:19 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/xfstests-dev} | 2 | |
Failure Reason: Test failure: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)
fail | 7438461 | 2023-10-26 05:21:54 | 2023-10-26 05:47:57 | 2023-10-26 07:47:06 | 1:59:09 | 1:49:02 | 0:10:07 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7438462 | 2023-10-26 05:21:55 | 2023-10-26 05:51:18 | 2023-10-26 06:52:18 | 1:01:00 | 0:45:00 | 0:16:00 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} | 2 | |
Failure Reason: Test failure: test_idem_unaffected_root_squash (tasks.cephfs.test_admin.TestFsAuthorizeUpdate)
fail | 7438463 | 2023-10-26 05:21:55 | 2023-10-26 06:01:09 | 2023-10-26 07:22:16 | 1:21:07 | 1:11:10 | 0:09:57 | smithi | main | centos | 9.stream | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
Failure Reason: "2023-10-26T06:35:59.434146+0000 mon.a (mon.0) 3643 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log
pass | 7438464 | 2023-10-26 05:21:56 | 2023-10-26 06:01:10 | 2023-10-26 06:46:35 | 0:45:25 | 0:38:27 | 0:06:58 | smithi | main | rhel | 8.6 | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 | |
fail | 7438465 | 2023-10-26 05:21:57 | 2023-10-26 06:01:40 | 2023-10-26 07:00:29 | 0:58:49 | 0:48:17 | 0:10:32 | smithi | main | ubuntu | 22.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/valgrind} | 2 | |
Failure Reason: "2023-10-26T06:30:26.090571+0000 mon.a (mon.0) 376 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7438466 | 2023-10-26 05:21:58 | 2023-10-26 06:02:41 | 2023-10-26 08:06:13 | 2:03:32 | 1:51:48 | 0:11:44 | smithi | main | ubuntu | 22.04 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/basic}} | 2 | |
pass | 7438467 | 2023-10-26 05:21:58 | 2023-10-26 06:04:21 | 2023-10-26 06:48:33 | 0:44:12 | 0:35:58 | 0:08:14 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} | 3 | |
fail | 7438468 | 2023-10-26 05:21:59 | 2023-10-26 06:05:22 | 2023-10-26 06:36:57 | 0:31:35 | 0:22:23 | 0:09:12 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/journal-repair} | 2 | |
Failure Reason: "2023-10-26T06:34:32.593318+0000 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi007:0 (7074), after 303.428 seconds" in cluster log
fail | 7438469 | 2023-10-26 05:22:00 | 2023-10-26 06:06:02 | 2023-10-26 06:59:35 | 0:53:33 | 0:42:16 | 0:11:17 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi050 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=384993681abceb8cb8affea10538d607052db4b4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
fail | 7438470 | 2023-10-26 05:22:01 | 2023-10-26 06:08:23 | 2023-10-26 06:47:34 | 0:39:11 | 0:28:17 | 0:10:54 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} | 2 | |
Failure Reason: "2023-10-26T06:30:33.038056+0000 osd.0 (osd.0) 3 : cluster [WRN] OSD bench result of 69740.643613 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]." in cluster log
fail | 7438471 | 2023-10-26 05:22:02 | 2023-10-26 06:08:34 | 2023-10-26 07:03:14 | 0:54:40 | 0:44:10 | 0:10:30 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=384993681abceb8cb8affea10538d607052db4b4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'