User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-12-01 15:11:43 | 2023-12-01 15:12:19 | 2023-12-02 03:32:15 | 12:19:56 | fs | reef-release | smithi | a6f8299 | 8 | 12 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7473755 | 2023-12-01 15:11:55 | 2023-12-01 15:12:16 | 2023-12-01 16:00:04 | 0:47:48 | 0:34:19 | 0:13:29 | smithi | wip-package-queries | ubuntu | 20.04 | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_20.04} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} | 2 |
Failure Reason: Command failed (workunit test libcephfs/test.sh) on smithi082 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6f82997ac57b70a895455dfed6256360a1e4c32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'
pass | 7473757 | 2023-12-01 15:11:56 | 2023-12-01 15:12:16 | 2023-12-01 16:32:28 | 1:20:12 | 1:05:37 | 0:14:35 | smithi | wip-package-queries | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} | 4 |
fail | 7473759 | 2023-12-01 15:11:57 | 2023-12-01 15:12:17 | 2023-12-01 16:02:47 | 0:50:30 | 0:39:28 | 0:11:02 | smithi | wip-package-queries | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/valgrind} | 2 |
Failure Reason: saw valgrind issues
pass | 7473761 | 2023-12-01 15:11:57 | 2023-12-01 15:12:17 | 2023-12-01 15:37:56 | 0:25:39 | 0:19:05 | 0:06:34 | smithi | wip-package-queries | rhel | 8.6 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} | 2 |
pass | 7473763 | 2023-12-01 15:11:58 | 2023-12-01 15:12:18 | 2023-12-01 15:57:21 | 0:45:03 | 0:34:12 | 0:10:51 | smithi | wip-package-queries | rhel | 8.6 | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} | 2 |
pass | 7473765 | 2023-12-01 15:11:59 | 2023-12-01 15:12:18 | 2023-12-01 15:50:44 | 0:38:26 | 0:23:06 | 0:15:20 | smithi | wip-package-queries | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/sessionmap} | 2 |
fail | 7473767 | 2023-12-01 15:12:00 | 2023-12-01 15:12:19 | 2023-12-01 16:01:39 | 0:49:20 | 0:30:52 | 0:18:28 | smithi | wip-package-queries | ubuntu | 22.04 | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} | 2 |
Failure Reason: Command failed (workunit test kernel_untar_build.sh) on smithi140 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6f82997ac57b70a895455dfed6256360a1e4c32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'
pass | 7473769 | 2023-12-01 15:12:00 | 2023-12-01 15:12:19 | 2023-12-01 15:39:00 | 0:26:41 | 0:16:37 | 0:10:04 | smithi | wip-package-queries | centos | 9.stream | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} | 2 |
pass | 7473771 | 2023-12-01 15:12:01 | 2023-12-01 15:12:19 | 2023-12-01 15:57:15 | 0:44:56 | 0:27:53 | 0:17:03 | smithi | wip-package-queries | ubuntu | 20.04 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/iozone}} | 2 |
fail | 7473773 | 2023-12-01 15:12:02 | 2023-12-01 15:12:20 | 2023-12-01 16:01:34 | 0:49:14 | 0:37:16 | 0:11:58 | smithi | wip-package-queries | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 |
Failure Reason: saw valgrind issues
fail | 7473775 | 2023-12-01 15:12:03 | 2023-12-01 15:12:20 | 2023-12-01 16:14:58 | 1:02:38 | 0:53:38 | 0:09:00 | smithi | wip-package-queries | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/postgres}} | 3 |
Failure Reason: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
pass | 7473777 | 2023-12-01 15:12:03 | 2023-12-01 15:12:20 | 2023-12-01 15:49:14 | 0:36:54 | 0:24:28 | 0:12:26 | smithi | wip-package-queries | rhel | 8.6 | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} | 2 |
dead | 7473779 | 2023-12-01 15:12:04 | 2023-12-01 15:12:21 | 2023-12-02 03:32:15 | 12:19:54 | | | smithi | wip-package-queries | centos | 9.stream | fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-common} | 3 |
Failure Reason: hit max job timeout
fail | 7473781 | 2023-12-01 15:12:05 | 2023-12-01 15:12:21 | 2023-12-01 15:37:48 | 0:25:27 | 0:15:18 | 0:10:09 | smithi | wip-package-queries | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} | 2 |
Failure Reason: Test failure: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
fail | 7473783 | 2023-12-01 15:12:06 | 2023-12-01 15:12:21 | 2023-12-01 17:36:00 | 2:23:39 | 2:09:40 | 0:13:59 | smithi | wip-package-queries | ubuntu | 22.04 | fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{ubuntu_latest} tasks/mirror} | 1 |
Failure Reason: "2023-12-01T15:41:07.644747+0000 mon.a (mon.0) 116 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7473785 | 2023-12-01 15:12:06 | 2023-12-01 15:12:22 | 2023-12-01 15:51:14 | 0:38:52 | 0:25:32 | 0:13:20 | smithi | wip-package-queries | centos | 9.stream | fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} | 1 |
Failure Reason: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.TestMirroring)
fail | 7473787 | 2023-12-01 15:12:07 | 2023-12-01 15:12:22 | 2023-12-01 16:35:00 | 1:22:38 | 1:10:55 | 0:11:43 | smithi | wip-package-queries | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} | 3 |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
pass | 7473789 | 2023-12-01 15:12:08 | 2023-12-01 15:12:22 | 2023-12-01 15:47:45 | 0:35:23 | 0:25:14 | 0:10:09 | smithi | wip-package-queries | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} | 2 |
fail | 7473791 | 2023-12-01 15:12:09 | 2023-12-01 15:12:23 | 2023-12-01 15:54:51 | 0:42:28 | 0:32:44 | 0:09:44 | smithi | wip-package-queries | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} | 3 |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7473793 | 2023-12-01 15:12:10 | 2023-12-01 15:12:23 | 2023-12-01 15:59:53 | 0:47:30 | 0:36:08 | 0:11:22 | smithi | wip-package-queries | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} | 3 |
Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'}
fail | 7473795 | 2023-12-01 15:12:10 | 2023-12-01 15:12:24 | 2023-12-01 17:23:26 | 2:11:02 | 1:56:50 | 0:14:12 | smithi | wip-package-queries | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} | 2 |
Failure Reason: "2023-12-01T16:02:00.758145+0000 mon.a (mon.0) 1008 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log