User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
rishabh | 2023-06-29 12:43:28 | 2023-06-29 12:43:39 | 2023-06-29 19:46:04 | 7:02:25 | fs | wip-rishabh-rename | smithi | 4c403ab | 9 | 27 | 13 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7321313 | 2023-06-29 12:43:35 | 2023-06-29 12:43:36 | 2023-06-29 19:46:04 | 7:02:28 | 6:46:42 | 0:15:46 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi060 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
pass | 7321314 | 2023-06-29 12:43:36 | 2023-06-29 12:43:37 | 2023-06-29 13:38:28 | 0:54:51 | 0:38:26 | 0:16:25 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7321315 | 2023-06-29 12:43:36 | 2023-06-29 12:43:37 | 2023-06-29 16:12:45 | 3:29:08 | 3:16:22 | 0:12:46 | smithi | main | ubuntu | 20.04 | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_20.04} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client} | 2 | |
Failure Reason:
Command failed (workunit test client/test.sh) on smithi008 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/client/test.sh' |
fail | 7321316 | 2023-06-29 12:43:37 | | 2023-06-29 13:52:52 | 3474 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} | 3 |
Failure Reason:
error during scrub thrashing: rank damage found: {'backtrace'} |
fail | 7321317 | 2023-06-29 12:43:38 | 2023-06-29 12:43:38 | 2023-06-29 14:57:11 | 2:13:33 | 2:00:29 | 0:13:04 | smithi | main | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
pass | 7321318 | 2023-06-29 12:43:39 | 2023-06-29 12:43:39 | 2023-06-29 13:29:27 | 0:45:48 | 0:33:51 | 0:11:57 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} | 3 | |
fail | 7321319 | 2023-06-29 12:43:39 | 2023-06-29 12:43:39 | 2023-06-29 19:29:53 | 6:46:14 | 6:35:29 | 0:10:45 | smithi | main | rhel | 8.6 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi178 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 7321320 | 2023-06-29 12:43:40 | 2023-06-29 12:43:40 | 2023-06-29 19:34:09 | 6:50:29 | 6:34:26 | 0:16:03 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi064 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
dead | 7321321 | 2023-06-29 12:43:41 | 2023-06-29 12:43:41 | 2023-06-29 12:50:14 | 0:06:33 | | | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
pass | 7321322 | 2023-06-29 12:43:42 | 2023-06-29 12:43:42 | 2023-06-29 13:49:18 | 1:05:36 | 0:49:43 | 0:15:53 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} | 3 | |
dead | 7321323 | 2023-06-29 12:43:43 | 2023-06-29 12:43:43 | 2023-06-29 12:50:40 | 0:06:57 | | | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} | 4 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
pass | 7321324 | 2023-06-29 12:43:43 | 2023-06-29 12:43:43 | 2023-06-29 13:41:21 | 0:57:38 | 0:42:58 | 0:14:40 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/direct_io}} | 3 | |
fail | 7321325 | 2023-06-29 12:43:44 | 2023-06-29 12:44:04 | 2023-06-29 16:27:02 | 3:42:58 | 3:24:45 | 0:18:13 | smithi | main | ubuntu | 20.04 | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} | 2 | |
Failure Reason:
Command failed (workunit test suites/dbench.sh) on smithi033 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh' |
fail | 7321326 | 2023-06-29 12:43:45 | 2023-06-29 12:45:35 | 2023-06-29 13:18:02 | 0:32:27 | 0:17:58 | 0:14:29 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} | 3 | |
Failure Reason:
Command failed on smithi052 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 pull' |
fail | 7321327 | 2023-06-29 12:43:46 | 2023-06-29 12:45:45 | 2023-06-29 13:19:19 | 0:33:34 | 0:16:42 | 0:16:52 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} | 5 | |
Failure Reason:
Command failed on smithi144 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
fail | 7321328 | 2023-06-29 12:43:46 | 2023-06-29 12:46:06 | 2023-06-29 19:29:09 | 6:43:03 | 6:25:27 | 0:17:36 | smithi | main | ubuntu | 20.04 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 7321329 | 2023-06-29 12:43:47 | 2023-06-29 12:46:26 | 2023-06-29 19:29:48 | 6:43:22 | 6:27:31 | 0:15:51 | smithi | main | centos | 8.stream | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 7321330 | 2023-06-29 12:43:48 | 2023-06-29 12:46:26 | 2023-06-29 15:29:58 | 2:43:32 | 2:29:25 | 0:14:07 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi084 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5f17ceb2-167f-11ee-9b2e-001a4aab830c -e sha1=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 -- bash -c 'ceph orch ps'" |
dead | 7321331 | 2023-06-29 12:43:49 | 2023-06-29 12:46:57 | 2023-06-29 12:53:16 | 0:06:19 | | | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
fail | 7321332 | 2023-06-29 12:43:49 | 2023-06-29 12:47:27 | 2023-06-29 15:00:09 | 2:12:42 | 1:59:26 | 0:13:16 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/misc}} | 3 | |
Failure Reason:
error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds |
pass | 7321333 | 2023-06-29 12:43:50 | 2023-06-29 12:47:38 | 2023-06-29 14:27:21 | 1:39:43 | 1:26:16 | 0:13:27 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} | 3 | |
fail | 7321334 | 2023-06-29 12:43:51 | 2023-06-29 12:47:38 | 2023-06-29 15:05:46 | 2:18:08 | 2:02:29 | 0:15:39 | smithi | main | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 | |
Failure Reason:
saw valgrind issues |
dead | 7321335 | 2023-06-29 12:43:51 | 2023-06-29 12:47:39 | 2023-06-29 12:54:42 | 0:07:03 | | | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} | 3 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
fail | 7321336 | 2023-06-29 12:43:52 | 2023-06-29 12:48:09 | 2023-06-29 13:41:59 | 0:53:50 | 0:44:29 | 0:09:21 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} | 3 | |
Failure Reason:
error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds |
fail | 7321337 | 2023-06-29 12:43:53 | 2023-06-29 12:49:10 | 2023-06-29 16:28:43 | 3:39:33 | 3:24:03 | 0:15:30 | smithi | main | ubuntu | 22.04 | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} | 2 | |
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on smithi067 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
fail | 7321338 | 2023-06-29 12:43:54 | 2023-06-29 12:49:40 | 2023-06-29 13:26:04 | 0:36:24 | 0:22:41 | 0:13:43 | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/pool-perm} | 2 | |
Failure Reason:
Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
fail | 7321339 | 2023-06-29 12:43:54 | 2023-06-29 12:49:41 | 2023-06-29 13:57:02 | 1:07:21 | 0:53:24 | 0:13:57 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} | 3 | |
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
dead | 7321340 | 2023-06-29 12:43:55 | 2023-06-29 12:50:01 | 2023-06-29 12:55:56 | 0:05:55 | | | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/quota} | 2 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
fail | 7321341 | 2023-06-29 12:43:56 | 2023-06-29 12:50:01 | 2023-06-29 19:41:18 | 6:51:17 | 6:37:51 | 0:13:26 | smithi | main | rhel | 8.6 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi132 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 7321342 | 2023-06-29 12:43:57 | 2023-06-29 12:50:32 | 2023-06-29 19:31:18 | 6:40:46 | 6:28:19 | 0:12:27 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi005 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
fail | 7321343 | 2023-06-29 12:43:57 | 2023-06-29 12:50:52 | 2023-06-29 13:47:37 | 0:56:45 | 0:37:31 | 0:19:14 | smithi | main | centos | 8.stream | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh' |
dead | 7321344 | 2023-06-29 12:43:58 | 2023-06-29 12:51:03 | 2023-06-29 12:57:10 | 0:06:07 | | | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} | 3 |
Failure Reason:
Error reimaging machines: [Errno 104] Connection reset by peer |
pass | 7321345 | 2023-06-29 12:43:59 | 2023-06-29 12:52:43 | 2023-06-29 13:47:59 | 0:55:16 | 0:43:37 | 0:11:39 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/pjd}} | 3 | |
fail | 7321346 | 2023-06-29 12:43:59 | 2023-06-29 12:52:54 | 2023-06-29 13:26:00 | 0:33:06 | 0:21:23 | 0:11:43 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason:
Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 pull' |
fail | 7321347 | 2023-06-29 12:44:00 | 2023-06-29 12:53:04 | 2023-06-29 19:34:56 | 6:41:52 | 6:26:42 | 0:15:10 | smithi | main | centos | 8.stream | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} | 2 | |
Failure Reason:
Command failed (workunit test suites/fsstress.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh' |
fail | 7321348 | 2023-06-29 12:44:01 | 2023-06-29 12:53:35 | 2023-06-29 13:22:22 | 0:28:47 | 0:15:40 | 0:13:07 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} | 5 | |
Failure Reason:
Command failed on smithi156 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
fail | 7321349 | 2023-06-29 12:44:02 | 2023-06-29 12:54:35 | 2023-06-29 19:29:07 | 6:34:32 | 6:23:51 | 0:10:41 | smithi | main | ubuntu | 22.04 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi046 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
dead | 7321350 | 2023-06-29 12:44:02 | 2023-06-29 12:54:56 | 2023-06-29 13:01:05 | 0:06:09 | | | smithi | main | centos | 8.stream | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} | 2 |
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
dead | 7321351 | 2023-06-29 12:44:03 | 2023-06-29 12:56:06 | 2023-06-29 13:01:38 | 0:05:32 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
fail | 7321352 | 2023-06-29 12:44:04 | 2023-06-29 12:56:57 | 2023-06-29 13:20:19 | 0:23:22 | 0:16:57 | 0:06:25 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} | 2 | |
Failure Reason: Command failed (workunit test fs/quota/quota.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.3/client.3/tmp && cd -- /home/ubuntu/cephtest/mnt.3/client.3/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4c403ab5c815eaf908a3d8b31cb0ce62cdd9db36 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="3" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.3 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.3 CEPH_MNT=/home/ubuntu/cephtest/mnt.3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.3/qa/workunits/fs/quota/quota.sh'
dead | 7321353 | 2023-06-29 12:44:04 | 2023-06-29 12:57:28 | 2023-06-29 13:05:25 | 0:07:57 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
fail | 7321354 | 2023-06-29 12:44:05 | 2023-06-29 13:00:59 | 2023-06-29 13:29:31 | 0:28:32 | 0:15:56 | 0:12:36 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} | 4 | |
Failure Reason: Command failed on smithi102 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'
pass | 7321355 | 2023-06-29 12:44:06 | 2023-06-29 13:01:59 | 2023-06-29 13:44:44 | 0:42:45 | 0:30:32 | 0:12:13 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/fsync-tester}} | 3 | |
dead | 7321356 | 2023-06-29 12:44:07 | 2023-06-29 13:05:40 | 2023-06-29 13:23:45 | 0:18:05 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} | 3 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7321357 | 2023-06-29 12:44:07 | 2023-06-29 13:18:13 | 2023-06-29 14:00:28 | 0:42:15 | 0:29:44 | 0:12:31 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} | 3 | |
dead | 7321358 | 2023-06-29 12:44:08 | 2023-06-29 13:19:23 | 2023-06-29 13:23:49 | 0:04:26 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} | 3 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
dead | 7321359 | 2023-06-29 12:44:09 | 2023-06-29 13:19:24 | 2023-06-29 13:24:48 | 0:05:24 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/iozone}} | 3 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
dead | 7321360 | 2023-06-29 12:44:10 | 2023-06-29 14:35:02 | 2023-06-29 14:40:51 | 0:05:49 | smithi | main | ubuntu | 22.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/pjd}} | 2 | |||
Failure Reason: Error reimaging machines: [Errno 104] Connection reset by peer
pass | 7321361 | 2023-06-29 12:44:10 | 2023-06-29 14:35:23 | 2023-06-29 15:03:27 | 0:28:04 | 0:15:17 | 0:12:47 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-readahead} | 2 |