User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
vshankar | 2022-10-20 10:31:38 | 2022-10-20 12:23:32 | 2022-10-21 06:00:25 | 17:36:53 | fs | wip-vshankar-testing1-20221017-112130 | smithi | 341cd46 | 37 | 56 | 17 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7075360 | 2022-10-20 10:31:46 | 2022-10-20 12:23:32 | 2022-10-20 13:21:22 | 0:57:50 | 0:46:53 | 0:10:57 | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} | 2 | |
Failure Reason: "1666269909.3946168 mon.a (mon.0) 462 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 7075361 | 2022-10-20 10:31:47 | 2022-10-20 12:28:23 | 2022-10-20 13:12:10 | 0:43:47 | 0:29:11 | 0:14:36 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} | 3 | |
Failure Reason: Command failed on smithi033 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075362 | 2022-10-20 10:31:48 | 2022-10-20 12:31:03 | 2022-10-20 13:16:33 | 0:45:30 | 0:22:19 | 0:23:11 | smithi | main | rhel | 8.6 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
pass | 7075363 | 2022-10-20 10:31:49 | 2022-10-20 12:41:35 | 2022-10-20 13:17:59 | 0:36:24 | 0:22:50 | 0:13:34 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | |
fail | 7075364 | 2022-10-20 10:31:50 | 2022-10-20 12:43:16 | 2022-10-20 16:05:47 | 3:22:31 | 3:10:21 | 0:12:10 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason: "1666272641.8679059 mds.c (mds.0) 1 : cluster [WRN] client.4721 isn't responding to mclientcaps(revoke), ino 0x100000031fa pending pAsxXsxFc issued pAsxXsxFxcwb, sent 300.004450 seconds ago" in cluster log
pass | 7075365 | 2022-10-20 10:31:51 | 2022-10-20 12:53:14 | 2022-10-20 13:25:27 | 0:32:13 | 0:19:54 | 0:12:19 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap_schedule_snapdir} | 2 | |
dead | 7075366 | 2022-10-20 10:31:52 | 2022-10-20 12:54:35 | 2022-10-21 01:04:48 | 12:10:13 | | | smithi | main | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/valgrind} | 2 | |
Failure Reason: hit max job timeout
fail | 7075367 | 2022-10-20 10:31:53 | 2022-10-20 12:55:15 | 2022-10-20 13:36:24 | 0:41:09 | 0:29:03 | 0:12:06 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/iozone}} | 3 | |
Failure Reason: Command failed on smithi137 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075368 | 2022-10-20 10:31:54 | 2022-10-20 12:55:16 | 2022-10-20 13:25:00 | 0:29:44 | 0:17:48 | 0:11:56 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snapshots} | 2 | |
Failure Reason: Test failure: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
fail | 7075369 | 2022-10-20 10:31:55 | 2022-10-20 12:55:36 | 2022-10-20 13:30:36 | 0:35:00 | 0:22:59 | 0:12:01 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi143 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d75aa2e0-5079-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
fail | 7075370 | 2022-10-20 10:31:56 | 2022-10-20 12:55:36 | 2022-10-20 13:29:22 | 0:33:46 | 0:22:21 | 0:11:25 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed on smithi084 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075371 | 2022-10-20 10:31:57 | 2022-10-20 13:00:27 | 2022-10-20 13:37:57 | 0:37:30 | 0:21:22 | 0:16:08 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} | 2 | |
pass | 7075372 | 2022-10-20 10:31:58 | 2022-10-20 13:04:58 | 2022-10-20 13:50:34 | 0:45:36 | 0:28:10 | 0:17:26 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/direct_io}} | 3 | |
fail | 7075373 | 2022-10-20 10:31:59 | 2022-10-20 13:10:40 | 2022-10-20 14:31:04 | 1:20:24 | 1:11:31 | 0:08:53 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} | 3 | |
Failure Reason: Command failed on smithi078 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'"
dead | 7075374 | 2022-10-20 10:32:00 | 2022-10-20 13:12:10 | 2022-10-21 01:23:12 | 12:11:02 | | | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} | 2 | |
Failure Reason: hit max job timeout
dead | 7075375 | 2022-10-20 10:32:01 | 2022-10-20 13:12:21 | 2022-10-21 01:28:02 | 12:15:41 | | | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason: hit max job timeout
fail | 7075376 | 2022-10-20 10:32:02 | 2022-10-20 13:16:42 | 2022-10-20 14:43:11 | 1:26:29 | 1:15:24 | 0:11:05 | smithi | main | rhel | 8.6 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} | 2 | |
Failure Reason: "1666275048.696779 mon.a (mon.0) 2458 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 7075377 | 2022-10-20 10:32:03 | 2022-10-20 13:18:02 | 2022-10-20 13:51:31 | 0:33:29 | 0:20:32 | 0:12:57 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/truncate_delay} | 2 | |
pass | 7075378 | 2022-10-20 10:32:04 | 2022-10-20 13:20:43 | 2022-10-20 14:16:24 | 0:55:41 | 0:44:39 | 0:11:02 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} | 3 | |
fail | 7075379 | 2022-10-20 10:32:05 | 2022-10-20 13:21:23 | 2022-10-20 13:54:12 | 0:32:49 | 0:22:28 | 0:10:21 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} | 3 | |
Failure Reason: Command failed on smithi186 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
dead | 7075380 | 2022-10-20 10:32:06 | 2022-10-20 13:25:04 | 2022-10-21 01:36:56 | 12:11:52 | | | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: hit max job timeout
fail | 7075381 | 2022-10-20 10:32:07 | 2022-10-20 13:25:35 | 2022-10-20 14:28:16 | 1:02:41 | 0:51:26 | 0:11:15 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} | 3 | |
Failure Reason: Command failed on smithi084 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'"
fail | 7075382 | 2022-10-20 10:32:08 | 2022-10-20 13:29:26 | 2022-10-20 14:13:55 | 0:44:29 | 0:31:15 | 0:13:14 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} | 3 | |
Failure Reason: Command failed on smithi053 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075383 | 2022-10-20 10:32:09 | 2022-10-20 14:43:23 | 2022-10-20 15:39:00 | 0:55:37 | 0:33:44 | 0:21:53 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} | 3 | |
pass | 7075384 | 2022-10-20 10:32:10 | 2022-10-20 14:57:35 | 2022-10-20 15:28:47 | 0:31:12 | 0:19:56 | 0:11:16 | smithi | main | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/lockdep} | 2 | |
pass | 7075385 | 2022-10-20 10:32:11 | 2022-10-20 14:59:16 | 2022-10-20 15:36:17 | 0:37:01 | 0:25:52 | 0:11:09 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/iozone}} | 2 | |
pass | 7075386 | 2022-10-20 10:32:12 | 2022-10-20 14:59:16 | 2022-10-20 15:30:48 | 0:31:32 | 0:20:42 | 0:10:50 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/alternate-pool} | 2 | |
fail | 7075387 | 2022-10-20 10:32:13 | 2022-10-20 14:59:37 | 2022-10-20 15:33:13 | 0:33:36 | 0:22:53 | 0:10:43 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi035 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2dcc0be4-508b-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
fail | 7075388 | 2022-10-20 10:32:14 | 2022-10-20 15:00:17 | 2022-10-20 15:44:20 | 0:44:03 | 0:30:56 | 0:13:07 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed on smithi037 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075389 | 2022-10-20 10:32:15 | 2022-10-20 15:01:18 | 2022-10-20 15:46:15 | 0:44:57 | 0:32:54 | 0:12:03 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/asok_dump_tree} | 2 | |
pass | 7075390 | 2022-10-20 10:32:16 | 2022-10-20 15:01:18 | 2022-10-20 15:33:12 | 0:31:54 | 0:19:22 | 0:12:32 | smithi | main | rhel | 8.6 | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client} | 2 | |
fail | 7075391 | 2022-10-20 10:32:17 | 2022-10-20 15:02:09 | 2022-10-20 17:33:25 | 2:31:16 | 2:19:32 | 0:11:44 | smithi | main | rhel | 8.6 | fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{rhel_8} tasks/mirror} | 1 | |
Failure Reason: "1666283469.9059691 mon.a (mon.0) 4107 : cluster [WRN] Health check failed: 0 slow ops, oldest one blocked for 40 sec, osd.2 has slow ops (SLOW_OPS)" in cluster log
dead | 7075392 | 2022-10-20 10:32:18 | 2022-10-20 15:02:09 | 2022-10-21 03:13:52 | 12:11:43 | | | smithi | main | ubuntu | 20.04 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
Failure Reason: hit max job timeout
pass | 7075393 | 2022-10-20 10:32:19 | 2022-10-20 15:03:50 | 2022-10-20 16:31:24 | 1:27:34 | 1:17:31 | 0:10:03 | smithi | main | rhel | 8.6 | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 | |
fail | 7075394 | 2022-10-20 10:32:20 | 2022-10-20 15:04:01 | 2022-10-20 15:34:38 | 0:30:37 | 0:19:20 | 0:11:17 | smithi | main | rhel | 8.6 | fs/top/{begin/{0-install 1-ceph 2-logrotate} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/ignorelist_health supported-random-distros$/{rhel_8} tasks/fstop} | 1 | |
Failure Reason: SELinux denials found on ubuntu@smithi044.front.sepia.ceph.com: ['type=AVC msg=audit(1666278795.371:203): avc: denied { node_bind } for pid=1452 comm="ping" saddr=172.21.15.44 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']
fail | 7075395 | 2022-10-20 10:32:21 | 2022-10-20 15:04:01 | 2022-10-20 16:52:36 | 1:48:35 | 1:39:15 | 0:09:20 | smithi | main | centos | 8.stream | fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} | 1 | |
Failure Reason: Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
fail | 7075396 | 2022-10-20 10:32:22 | 2022-10-20 15:04:02 | 2022-10-20 15:36:40 | 0:32:38 | 0:25:18 | 0:07:20 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/direct_io}} | 3 | |
Failure Reason: Command failed on smithi079 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075397 | 2022-10-20 10:32:23 | 2022-10-20 15:04:32 | 2022-10-20 15:43:28 | 0:38:56 | 0:27:32 | 0:11:24 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/misc}} | 3 | |
Failure Reason: Command failed on smithi059 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075398 | 2022-10-20 10:32:24 | 2022-10-20 15:05:13 | 2022-10-20 15:44:50 | 0:39:37 | 0:29:24 | 0:10:13 | smithi | main | | | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} | 2 | |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi081 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=341cd46c8de24705ee92901c06b35c24133f2afa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'
fail | 7075399 | 2022-10-20 10:32:25 | 2022-10-20 15:05:13 | 2022-10-20 16:22:53 | 1:17:40 | 1:10:25 | 0:07:15 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} | 3 | |
Failure Reason: Command failed on smithi017 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'"
fail | 7075400 | 2022-10-20 10:32:26 | 2022-10-20 15:05:24 | 2022-10-20 15:35:20 | 0:29:56 | 0:16:03 | 0:13:53 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} | 2 | |
Failure Reason: Test failure: test_backtrace (tasks.cephfs.test_backtrace.TestBacktrace)
fail | 7075401 | 2022-10-20 10:32:27 | 2022-10-20 15:07:45 | 2022-10-20 16:25:47 | 1:18:02 | 1:09:32 | 0:08:30 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/blogbench}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
dead | 7075402 | 2022-10-20 10:32:28 | 2022-10-20 15:08:35 | 2022-10-21 03:19:46 | 12:11:11 | | | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cap-flush} | 2 | |
Failure Reason: hit max job timeout
fail | 7075403 | 2022-10-20 10:32:29 | 2022-10-20 15:08:56 | 2022-10-20 15:46:54 | 0:37:58 | 0:26:54 | 0:11:04 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} | 3 | |
Failure Reason: Command failed on smithi005 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075404 | 2022-10-20 10:32:30 | 2022-10-20 15:09:26 | 2022-10-20 16:53:59 | 1:44:33 | 1:30:48 | 0:13:45 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} | 2 | |
fail | 7075405 | 2022-10-20 10:32:30 | 2022-10-20 15:12:47 | 2022-10-20 16:58:44 | 1:45:57 | 1:35:44 | 0:10:13 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: Command failed on smithi084 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'"
dead | 7075406 | 2022-10-20 10:32:31 | 2022-10-20 15:15:38 | 2022-10-21 03:37:18 | 12:21:40 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/norstats}} | 3 | |||
Failure Reason: hit max job timeout
pass | 7075407 | 2022-10-20 10:32:32 | 2022-10-20 15:25:50 | 2022-10-20 16:12:06 | 0:46:16 | 0:36:46 | 0:09:30 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} | 3 | |
fail | 7075408 | 2022-10-20 10:32:33 | 2022-10-20 15:28:51 | 2022-10-20 16:09:43 | 0:40:52 | 0:26:56 | 0:13:56 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} | 3 | |
Failure Reason: Command failed on smithi040 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075409 | 2022-10-20 10:32:35 | 2022-10-20 16:05:58 | 1357 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | ||||
fail | 7075410 | 2022-10-20 10:32:36 | 2022-10-20 15:32:42 | 2022-10-20 16:02:19 | 0:29:37 | 0:21:48 | 0:07:49 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsync-tester}} | 3 | |
Failure Reason: Command failed on smithi035 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075411 | 2022-10-20 10:32:36 | 2022-10-20 15:33:22 | 2022-10-20 16:31:06 | 0:57:44 | 0:46:01 | 0:11:43 | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} | 2 | |
pass | 7075412 | 2022-10-20 10:32:37 | 2022-10-20 15:33:43 | 2022-10-20 16:47:48 | 1:14:05 | 1:02:08 | 0:11:57 | smithi | main | rhel | 8.6 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 | |
dead | 7075413 | 2022-10-20 10:32:38 | 2022-10-20 15:33:43 | 2022-10-21 03:45:29 | 12:11:46 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} | 3 | |||
Failure Reason: hit max job timeout
fail | 7075414 | 2022-10-20 10:32:39 | 2022-10-20 15:35:24 | 2022-10-20 17:13:57 | 1:38:33 | 1:30:32 | 0:08:01 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iogen}} | 3 | |
Failure Reason: Command failed on smithi079 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'"
fail | 7075415 | 2022-10-20 10:32:40 | 2022-10-20 15:36:44 | 2022-10-20 16:25:53 | 0:49:09 | 0:39:22 | 0:09:47 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} | 2 | |
Failure Reason: "1666281742.8155503 mon.a (mon.0) 218 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
fail | 7075416 | 2022-10-20 10:32:42 | 2022-10-20 15:36:45 | 2022-10-20 16:08:29 | 0:31:44 | 0:22:00 | 0:09:44 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed on smithi063 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075417 | 2022-10-20 10:32:43 | 2022-10-20 15:39:06 | 2022-10-20 16:19:41 | 0:40:35 | 0:23:02 | 0:17:33 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi066 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 86f63504-5091-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
dead | 7075418 | 2022-10-20 10:32:44 | 2022-10-20 15:43:37 | 2022-10-21 03:53:52 | 12:10:15 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/direct_io}} | 3 | |||
Failure Reason: hit max job timeout
dead | 7075419 | 2022-10-20 10:32:45 | 2022-10-20 15:44:27 | 2022-10-21 03:54:45 | 12:10:18 | smithi | main | ubuntu | 20.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} | 2 | |||
Failure Reason: hit max job timeout
fail | 7075420 | 2022-10-20 10:32:46 | 2022-10-20 15:44:58 | 2022-10-20 16:15:11 | 0:30:13 | 0:22:47 | 0:07:26 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/misc}} | 3 | |
Failure Reason: Command failed on smithi006 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075421 | 2022-10-20 10:32:47 | 2022-10-20 15:46:18 | 2022-10-20 17:00:18 | 1:14:00 | 1:02:51 | 0:11:09 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 | |
pass | 7075422 | 2022-10-20 10:32:48 | 2022-10-20 15:46:59 | 2022-10-20 16:39:09 | 0:52:10 | 0:27:44 | 0:24:26 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} | 2 | |
pass | 7075423 | 2022-10-20 10:32:48 | 2022-10-20 16:02:21 | 2022-10-20 16:42:20 | 0:39:59 | 0:26:35 | 0:13:24 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} | 2 | |
fail | 7075424 | 2022-10-20 10:32:49 | 2022-10-20 16:05:52 | 2022-10-20 16:47:26 | 0:41:34 | 0:29:38 | 0:11:56 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/blogbench}} | 3 | |
Failure Reason: Command failed on smithi035 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075425 | 2022-10-20 10:32:50 | 2022-10-20 16:06:03 | 2022-10-20 16:41:49 | 0:35:46 | 0:22:49 | 0:12:57 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi103 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b608a112-5094-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
dead | 7075426 | 2022-10-20 10:32:51 | 2022-10-20 16:09:47 | 2022-10-21 04:22:27 | 12:12:40 | smithi | main | centos | 8.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/journal-repair} | 2 | |||
Failure Reason: hit max job timeout
fail | 7075427 | 2022-10-20 10:32:52 | 2022-10-20 16:12:08 | 2022-10-20 16:52:04 | 0:39:56 | 0:27:02 | 0:12:54 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: Command failed on smithi006 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075428 | 2022-10-20 10:32:53 | 2022-10-20 16:15:19 | 2022-10-20 16:49:50 | 0:34:31 | 0:22:21 | 0:12:10 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} | 3 | |
Failure Reason: Command failed on smithi066 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075429 | 2022-10-20 10:32:54 | 2022-10-20 16:19:50 | 2022-10-20 17:32:13 | 1:12:23 | 0:58:00 | 0:14:23 | smithi | main | rhel | 8.6 | fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} | 2 | |
fail | 7075430 | 2022-10-20 10:32:55 | 2022-10-20 16:23:01 | 2022-10-20 17:44:08 | 1:21:07 | 1:11:20 | 0:09:47 | smithi | main | rhel | 8.6 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/basic}} | 2 | |
Failure Reason: Test failure: test_subvolumegroup_pin_distributed (tasks.cephfs.test_volumes.TestSubvolumeGroups)
pass | 7075431 | 2022-10-20 10:32:56 | 2022-10-20 16:23:01 | 2022-10-20 17:12:57 | 0:49:56 | 0:37:15 | 0:12:41 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsync-tester}} | 3 | |
pass | 7075432 | 2022-10-20 10:32:57 | 2022-10-20 16:25:52 | 2022-10-20 18:00:55 | 1:35:03 | 1:24:30 | 0:10:33 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 | |
dead | 7075433 | 2022-10-20 10:32:58 | 2022-10-20 16:26:02 | 2022-10-21 04:41:20 | 12:15:18 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/iogen}} | 3 | |||
Failure Reason: hit max job timeout
pass | 7075434 | 2022-10-20 10:32:59 | 2022-10-20 16:31:33 | 2022-10-20 17:50:18 | 1:18:45 | 1:01:09 | 0:17:36 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} | 2 | |
fail | 7075435 | 2022-10-20 10:33:00 | 2022-10-20 16:39:15 | 2022-10-20 17:19:13 | 0:39:58 | 0:26:36 | 0:13:22 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed on smithi103 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075436 | 2022-10-20 10:33:01 | 2022-10-20 16:41:56 | 2022-10-20 17:44:54 | 1:02:58 | 0:53:47 | 0:09:11 | smithi | main | rhel | 8.6 | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} | 2 | |
fail | 7075437 | 2022-10-20 10:33:02 | 2022-10-20 16:42:26 | 2022-10-20 17:16:41 | 0:34:15 | 0:22:26 | 0:11:49 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/direct_io}} | 3 | |
Failure Reason: Command failed on smithi035 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
dead | 7075438 | 2022-10-20 10:33:03 | 2022-10-20 16:47:37 | 2022-10-21 05:00:36 | 12:12:59 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} | 3 | |||
Failure Reason: hit max job timeout
dead | 7075439 | 2022-10-20 10:33:04 | 2022-10-20 16:49:58 | 2022-10-21 05:01:11 | 12:11:13 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/openfiletable} | 2 | |||
Failure Reason: hit max job timeout
fail | 7075440 | 2022-10-20 10:33:05 | 2022-10-20 16:49:58 | 2022-10-20 17:24:57 | 0:34:59 | 0:22:48 | 0:12:11 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi059 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d06a8452-509a-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
dead | 7075441 | 2022-10-20 10:33:06 | 2022-10-20 16:52:09 | 2022-10-21 05:06:01 | 12:13:52 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} | 2 | |||
Failure Reason: hit max job timeout
fail | 7075442 | 2022-10-20 10:33:07 | 2022-10-20 16:52:39 | 2022-10-20 17:47:46 | 0:55:07 | 0:42:13 | 0:12:54 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} | 3 | |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7075443 | 2022-10-20 10:33:08 | 2022-10-20 16:58:51 | 2022-10-20 17:41:54 | 0:43:03 | 0:29:10 | 0:13:53 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} | 3 | |
Failure Reason: Command failed on smithi005 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
fail | 7075444 | 2022-10-20 10:33:09 | 2022-10-20 17:00:21 | 2022-10-20 17:42:18 | 0:41:57 | 0:22:33 | 0:19:24 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: Command failed on smithi045 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075445 | 2022-10-20 10:33:10 | 2022-10-20 17:13:03 | 2022-10-20 17:44:19 | 0:31:16 | 0:20:15 | 0:11:01 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} | 2 | |
fail | 7075446 | 2022-10-20 10:33:11 | 2022-10-20 17:13:04 | 2022-10-20 17:51:03 | 0:37:59 | 0:26:06 | 0:11:53 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/norstats}} | 3 | |
Failure Reason: Command failed on smithi079 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''"
pass | 7075447 | 2022-10-20 10:33:11 | 2022-10-20 17:14:04 | 2022-10-20 19:16:05 | 2:02:01 | 1:49:08 | 0:12:53 | smithi | main | ubuntu | 20.04 | fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} | 2 | |
pass | 7075448 | 2022-10-20 10:33:12 | 2022-10-20 17:16:45 | 2022-10-20 18:39:52 | 1:23:07 | 1:12:00 | 0:11:07 | smithi | main | centos | 8.stream | fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} | 2 | |
fail | 7075449 | 2022-10-20 10:33:13 | 2022-10-20 17:16:45 | 2022-10-20 18:00:35 | 0:43:50 | 0:29:46 | 0:14:04 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsx}} | 3 | |
Failure Reason:
Fuse mount failed to populate /sys/ after 31 seconds |
||||||||||||||
dead | 7075450 | 2022-10-20 10:33:14 | 2022-10-20 17:19:16 | 2022-10-21 05:29:14 | 12:09:58 | smithi | main | rhel | 8.6 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/valgrind} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
pass | 7075451 | 2022-10-20 10:33:15 | 2022-10-20 17:19:16 | 2022-10-20 17:57:50 | 0:38:34 | 0:24:04 | 0:14:30 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/sessionmap} | 2 | |
pass | 7075452 | 2022-10-20 10:33:16 | 2022-10-20 17:25:08 | 2022-10-20 18:21:01 | 0:55:53 | 0:37:10 | 0:18:43 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} | 3 | |
fail | 7075453 | 2022-10-20 10:33:17 | 2022-10-20 17:33:29 | 2022-10-20 18:51:19 | 1:17:50 | 0:57:49 | 0:20:01 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} | 2 | |
Failure Reason:
"1666289490.736036 mon.a (mon.0) 426 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log |
||||||||||||||
fail | 7075454 | 2022-10-20 10:33:18 | 2022-10-20 17:42:00 | 2022-10-20 18:11:33 | 0:29:33 | 0:22:21 | 0:07:12 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} | 3 | |
Failure Reason:
Command failed on smithi045 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''" |
||||||||||||||
pass | 7075455 | 2022-10-20 10:33:19 | 2022-10-20 17:42:21 | 2022-10-20 18:19:19 | 0:36:58 | 0:25:16 | 0:11:42 | smithi | main | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/iozone}} | 2 | |
fail | 7075456 | 2022-10-20 10:33:20 | 2022-10-20 17:44:12 | 2022-10-20 18:17:29 | 0:33:17 | 0:25:29 | 0:07:48 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} | 3 | |
Failure Reason:
Command failed (workunit test suites/pjd.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=341cd46c8de24705ee92901c06b35c24133f2afa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh' |
||||||||||||||
fail | 7075457 | 2022-10-20 10:33:21 | 2022-10-20 17:44:22 | 2022-10-20 19:06:52 | 1:22:30 | 1:11:43 | 0:10:47 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} | 3 | |
Failure Reason:
Command failed on smithi084 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'" |
||||||||||||||
pass | 7075458 | 2022-10-20 10:33:22 | 2022-10-20 17:47:53 | 2022-10-20 18:21:06 | 0:33:13 | 0:22:22 | 0:10:51 | smithi | main | rhel | 8.6 | fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} | 2 | |
dead | 7075459 | 2022-10-20 10:33:23 | 2022-10-20 17:47:53 | 2022-10-21 06:00:25 | 12:12:32 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} | 2 | |||
Failure Reason:
hit max job timeout |
||||||||||||||
fail | 7075460 | 2022-10-20 10:33:24 | 2022-10-20 18:18:02 | 956 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} | 3 | ||||
Failure Reason:
Command failed on smithi106 with status 5: 'sudo systemctl stop ceph-f11e1b8e-50a2-11ed-8437-001a4aab830c@mon.b' |
||||||||||||||
fail | 7075461 | 2022-10-20 10:33:25 | 2022-10-20 17:51:04 | 2022-10-20 21:11:29 | 3:20:25 | 3:03:33 | 0:16:52 | smithi | main | ubuntu | 20.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
"1666298482.4140399 osd.6 (osd.6) 21 : cluster [ERR] 5.8s0 deep-scrub : stat mismatch, got 0/6 objects, 0/0 clones, 0/6 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/97434 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes." in cluster log |
||||||||||||||
fail | 7075462 | 2022-10-20 10:33:26 | 2022-10-20 17:57:56 | 2022-10-20 18:33:53 | 0:35:57 | 0:22:07 | 0:13:50 | smithi | main | centos | 8.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
Command failed on smithi111 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 68ba748e-50a4-11ed-8437-001a4aab830c -e sha1=341cd46c8de24705ee92901c06b35c24133f2afa -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'" |
||||||||||||||
pass | 7075463 | 2022-10-20 10:33:27 | 2022-10-20 18:00:36 | 2022-10-20 18:54:15 | 0:53:39 | 0:43:52 | 0:09:47 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} | 3 | |
fail | 7075464 | 2022-10-20 10:33:28 | 2022-10-20 18:00:57 | 2022-10-20 20:28:48 | 2:27:51 | 2:10:45 | 0:17:06 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} | 3 | |
Failure Reason:
Command failed on smithi045 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs snap-schedule status --fs=cephfs --path=/'" |
||||||||||||||
pass | 7075465 | 2022-10-20 10:33:29 | 2022-10-20 18:11:39 | 2022-10-20 19:02:34 | 0:50:55 | 0:34:23 | 0:16:32 | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/ffsb}} | 2 | |
pass | 7075466 | 2022-10-20 10:33:30 | 2022-10-20 18:17:40 | 2022-10-20 18:47:08 | 0:29:28 | 0:19:48 | 0:09:40 | smithi | main | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/truncate_delay} | 2 | |
fail | 7075467 | 2022-10-20 10:33:30 | 2022-10-20 18:18:10 | 2022-10-20 19:08:38 | 0:50:28 | 0:38:18 | 0:12:10 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} | 3 | |
Failure Reason:
Fuse mount failed to populate /sys/ after 31 seconds |
||||||||||||||
fail | 7075468 | 2022-10-20 10:33:32 | 2022-10-20 18:19:21 | 2022-10-20 18:50:35 | 0:31:14 | 0:22:23 | 0:08:51 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} | 3 | |
Failure Reason:
Command failed on smithi032 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''" |
||||||||||||||
fail | 7075469 | 2022-10-20 10:33:32 | 2022-10-20 18:21:12 | 2022-10-20 18:50:18 | 0:29:06 | 0:22:01 | 0:07:05 | smithi | main | centos | 8.stream | fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} | 3 | |
Failure Reason:
Command failed on smithi005 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume create cephfs sv_0 ''" |