User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
pdonnell | 2023-03-24 23:11:30 | 2023-03-24 23:16:09 | 2023-03-25 13:50:45 | 14:34:36 | fs | wip-pdonnell-testing-20230323.162417 | smithi | c19e8f2 | 6 | 21 | 4 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7219630 | | 2023-03-24 23:11:36 | 2023-03-24 23:16:09 | 2023-03-25 00:06:51 | 0:50:42 | 0:33:20 | 0:17:22 | smithi | parallel-gzip | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/damage} | 2 |

Failure Reason: Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
fail | 7219631 | | 2023-03-24 23:11:37 | 2023-03-24 23:19:26 | 2023-03-25 01:06:19 | 1:46:53 | 1:32:50 | 0:14:03 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} | 3 |

Failure Reason: error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds
fail | 7219632 | | 2023-03-24 23:11:37 | 2023-03-24 23:27:31 | 2023-03-25 00:09:54 | 0:42:23 | 0:31:59 | 0:10:24 | smithi | parallel-gzip | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} | 2 |

Failure Reason: Test failure: test_rebuild_missing_zeroth (tasks.cephfs.test_data_scan.TestDataScan)
dead | 7219633 | | 2023-03-24 23:11:38 | 2023-03-24 23:28:11 | 2023-03-25 11:50:27 | 12:22:16 | | | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} | 3 |

Failure Reason: hit max job timeout
fail | 7219634 | | 2023-03-24 23:11:39 | 2023-03-24 23:28:12 | 2023-03-24 23:52:22 | 0:24:10 | 0:14:15 | 0:09:55 | smithi | parallel-gzip | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} | 4 |

Failure Reason: Command failed on smithi179 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7219635 | | 2023-03-24 23:11:40 | 2023-03-24 23:28:57 | 2023-03-25 00:02:25 | 0:33:28 | 0:21:20 | 0:12:08 | smithi | parallel-gzip | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/forward-scrub} | 2 |
fail | 7219636 | | 2023-03-24 23:11:41 | 2023-03-24 23:28:57 | 2023-03-25 00:05:02 | 0:36:05 | 0:26:51 | 0:09:14 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} | 3 |

Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
dead | 7219637 | | 2023-03-24 23:11:42 | 2023-03-24 23:32:14 | 2023-03-25 12:00:39 | 12:28:25 | | | smithi | parallel-gzip | centos | 8.stream | fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-common} | 3 |

Failure Reason: hit max job timeout
fail | 7219638 | | 2023-03-24 23:11:42 | 2023-03-24 23:33:49 | 2023-03-25 00:07:30 | 0:33:41 | 0:25:39 | 0:08:02 | smithi | parallel-gzip | rhel | 8.6 | fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{rhel_8} workloads/cephfs-mirror-ha-workunit} | 1 |

Failure Reason: reached maximum tries (50) after waiting for 300 seconds
fail | 7219639 | | 2023-03-24 23:11:43 | 2023-03-24 23:33:50 | 2023-03-25 00:55:31 | 1:21:41 | 1:12:22 | 0:09:19 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 |

Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7219640 | | 2023-03-24 23:11:44 | 2023-03-24 23:36:45 | 2023-03-25 00:27:17 | 0:50:32 | 0:33:32 | 0:17:00 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 |

Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 7219641 | | 2023-03-24 23:11:45 | 2023-03-24 23:48:11 | 2023-03-25 01:29:51 | 1:41:40 | 1:34:21 | 0:07:19 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} | 3 |
pass | 7219642 | | 2023-03-24 23:11:46 | 2023-03-24 23:48:12 | 2023-03-25 00:26:08 | 0:37:56 | 0:23:30 | 0:14:26 | smithi | parallel-gzip | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} | 2 |
fail | 7219643 | | 2023-03-24 23:11:47 | 2023-03-24 23:52:47 | 2023-03-25 00:26:46 | 0:33:59 | 0:14:07 | 0:19:52 | smithi | parallel-gzip | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} | 4 |

Failure Reason: Command failed on smithi179 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7219644 | | 2023-03-24 23:11:47 | 2023-03-25 00:02:45 | 2023-03-25 00:36:00 | 0:33:15 | 0:26:27 | 0:06:48 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} | 3 |
fail | 7219645 | | 2023-03-24 23:11:48 | 2023-03-25 00:03:16 | 2023-03-25 00:36:07 | 0:32:51 | 0:25:09 | 0:07:42 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} | 3 |

Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 7219646 | | 2023-03-24 23:11:49 | 2023-03-25 00:05:21 | 2023-03-25 00:37:22 | 0:32:01 | 0:14:11 | 0:17:50 | smithi | parallel-gzip | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} | 5 |

Failure Reason: Command failed on smithi146 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'
dead | 7219647 | | 2023-03-24 23:11:50 | 2023-03-25 00:10:18 | 2023-03-25 13:50:45 | 13:40:27 | | | smithi | parallel-gzip | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/xfstests-dev} | 2 |

Failure Reason: hit max job timeout
pass | 7219648 | | 2023-03-24 23:11:51 | 2023-03-25 00:20:54 | 2023-03-25 00:41:54 | 0:21:00 | 0:15:36 | 0:05:24 | smithi | parallel-gzip | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} | 2 |
dead | 7219649 | | 2023-03-24 23:11:52 | 2023-03-25 00:20:54 | 2023-03-25 13:14:34 | 12:53:40 | | | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 |

Failure Reason: hit max job timeout
fail | 7219650 | | 2023-03-24 23:11:52 | 2023-03-25 00:27:11 | 2023-03-25 01:06:45 | 0:39:34 | 0:33:11 | 0:06:23 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 |

Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 7219651 | | 2023-03-24 23:11:53 | 2023-03-25 00:27:11 | 2023-03-25 01:25:09 | 0:57:58 | 0:47:16 | 0:10:42 | smithi | parallel-gzip | ubuntu | 22.04 | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 |

Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
fail | 7219652 | | 2023-03-24 23:11:54 | 2023-03-25 00:27:22 | 2023-03-25 00:48:02 | 0:20:40 | 0:12:51 | 0:07:49 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/postgres}} | 3 |

Failure Reason: Command failed on smithi099 with status 5: 'sudo systemctl stop ceph-3361c7f2-caa6-11ed-9afe-001a4aab830c@mon.b'
pass | 7219653 | | 2023-03-24 23:11:55 | 2023-03-25 00:29:16 | 2023-03-25 02:50:58 | 2:21:42 | 2:08:35 | 0:13:07 | smithi | parallel-gzip | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 |
fail | 7219654 | | 2023-03-24 23:11:56 | 2023-03-25 01:13:27 | | | 1712 | | smithi | parallel-gzip | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} | 2 |

Failure Reason: Test failure: test_reconnect_after_blocklisted (tasks.cephfs.test_client_recovery.TestClientRecovery)
fail | 7219655 | | 2023-03-24 23:11:57 | 2023-03-25 00:36:18 | 2023-03-25 02:39:25 | 2:03:07 | 1:56:41 | 0:06:26 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} | 3 |

Failure Reason: error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds
fail | 7219656 | | 2023-03-24 23:11:57 | 2023-03-25 00:36:18 | 2023-03-25 01:26:05 | 0:49:47 | 0:36:34 | 0:13:13 | smithi | parallel-gzip | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} | 2 |

Failure Reason: Test failure: test_rebuild_missing_zeroth (tasks.cephfs.test_data_scan.TestDataScan)
fail | 7219657 | | 2023-03-24 23:11:58 | 2023-03-25 00:36:19 | 2023-03-25 01:04:38 | 0:28:19 | 0:14:25 | 0:13:54 | smithi | parallel-gzip | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} | 5 |

Failure Reason: Command failed on smithi146 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'
fail | 7219658 | | 2023-03-24 23:11:59 | 2023-03-25 00:37:45 | 2023-03-25 01:05:03 | 0:27:18 | 0:16:21 | 0:10:57 | smithi | parallel-gzip | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/forward-scrub} | 2 |

Failure Reason: Test failure: test_orphan_scan (tasks.cephfs.test_forward_scrub.TestForwardScrub)
fail | 7219659 | | 2023-03-24 23:12:00 | 2023-03-25 00:48:26 | 2023-03-25 01:21:14 | 0:32:48 | 0:25:45 | 0:07:03 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} | 3 |

Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1e7de2f05230c7419acf5ca7f5fd22964e30ab77 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 7219660 | | 2023-03-24 23:12:01 | 2023-03-25 00:48:26 | 2023-03-25 02:06:23 | 1:17:57 | 1:04:33 | 0:13:24 | smithi | parallel-gzip | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} | 3 |

Failure Reason: error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds