Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7112846 2022-12-12 06:30:36 2022-12-12 06:30:37 2022-12-12 06:59:39 0:29:02 0:16:13 0:12:49 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} 2
Failure Reason:

Test failure: test_backtrace (tasks.cephfs.test_backtrace.TestBacktrace)

fail 7112847 2022-12-12 06:30:37 2022-12-12 06:30:37 2022-12-12 07:03:51 0:33:14 0:27:12 0:06:02 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7112848 2022-12-12 06:30:38 2022-12-12 06:30:38 2022-12-12 07:10:51 0:40:13 0:23:08 0:17:05 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi088 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bd4d0844-79ea-11ed-8441-001a4aab830c -e sha1=3aa21121aa9438156e768ae1ae733b4048246362 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7112849 2022-12-12 06:30:39 2022-12-12 06:30:39 2022-12-12 07:31:02 1:00:23 0:44:35 0:15:48 smithi main centos 8.stream fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{centos_8} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

fail 7112850 2022-12-12 06:30:40 2022-12-12 06:30:40 2022-12-12 08:25:16 1:54:36 1:44:43 0:09:53 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 7112851 2022-12-12 06:30:41 2022-12-12 07:13:39 0:32:41 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
Failure Reason:

Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

pass 7112852 2022-12-12 06:30:42 2022-12-12 06:30:42 2022-12-12 07:40:02 1:09:20 0:57:11 0:12:09 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} 2
fail 7112853 2022-12-12 06:30:43 2022-12-12 06:30:43 2022-12-12 13:50:31 7:19:48 7:08:13 0:11:35 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"1670828921.0401533 mds.c (mds.0) 1 : cluster [WRN] client.4760 isn't responding to mclientcaps(revoke), ino 0x10000005e11 pending pAsLsXsFscr issued pAsLsXsFscrb, sent 300.001326 seconds ago" in cluster log

pass 7112854 2022-12-12 06:30:44 2022-12-12 06:30:44 2022-12-12 07:10:33 0:39:49 0:27:00 0:12:49 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} 2
pass 7112855 2022-12-12 06:30:45 2022-12-12 06:30:45 2022-12-12 07:25:08 0:54:23 0:47:46 0:06:37 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
pass 7112856 2022-12-12 06:30:46 2022-12-12 06:30:46 2022-12-12 07:33:49 1:03:03 0:57:08 0:05:55 smithi main rhel 8.6 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/clone}} 2
fail 7112857 2022-12-12 06:30:47 2022-12-12 06:30:47 2022-12-12 07:05:48 0:35:01 0:23:16 0:11:45 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi098 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0e7fa9b6-79ea-11ed-8441-001a4aab830c -e sha1=3aa21121aa9438156e768ae1ae733b4048246362 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7112858 2022-12-12 06:30:48 2022-12-12 06:30:48 2022-12-12 07:10:14 0:39:26 0:33:15 0:06:11 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} 2
Failure Reason:

"1670827925.876299 mon.a (mon.0) 931 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7112859 2022-12-12 06:30:49 2022-12-12 06:30:49 2022-12-12 07:05:36 0:34:47 0:19:51 0:14:56 smithi main rhel 8.6 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/ino_release_cb} 2
fail 7112860 2022-12-12 06:30:50 2022-12-12 06:30:50 2022-12-12 08:10:09 1:39:19 1:27:57 0:11:22 smithi main ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'

dead 7112861 2022-12-12 06:30:50 2022-12-12 06:30:51 2022-12-12 18:44:10 12:13:19 smithi main centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/lockdep} 2
Failure Reason:

hit max job timeout

fail 7112862 2022-12-12 06:30:51 2022-12-12 06:30:52 2022-12-12 07:10:33 0:39:41 0:23:34 0:16:07 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi036 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ca5055e6-79ea-11ed-8441-001a4aab830c -e sha1=3aa21121aa9438156e768ae1ae733b4048246362 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7112863 2022-12-12 06:30:52 2022-12-12 06:30:52 2022-12-12 07:37:51 1:06:59 0:54:35 0:12:24 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
Failure Reason:

"1670828324.4767218 mon.a (mon.0) 537 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

dead 7112864 2022-12-12 06:30:53 2022-12-12 06:30:54 2022-12-12 06:51:37 0:20:43 smithi main rhel 8.6 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
Failure Reason:

Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds

fail 7112865 2022-12-12 06:30:54 2022-12-12 06:30:54 2022-12-12 07:06:01 0:35:07 0:21:38 0:13:29 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Test failure: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)

pass 7112866 2022-12-12 06:30:55 2022-12-12 06:30:55 2022-12-12 07:29:34 0:58:39 0:43:15 0:15:24 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} 3
fail 7112867 2022-12-12 06:30:56 2022-12-12 06:30:56 2022-12-12 07:01:30 0:30:34 0:18:50 0:11:44 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi012 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 7112868 2022-12-12 06:30:57 2022-12-12 06:30:57 2022-12-12 07:36:54 1:05:57 0:48:51 0:17:06 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
Failure Reason:

"1670829830.8946984 mon.a (mon.0) 2288 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7112869 2022-12-12 06:30:58 2022-12-12 06:30:58 2022-12-12 07:09:17 0:38:19 0:23:11 0:15:08 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi149 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cc2101ae-79ea-11ed-8441-001a4aab830c -e sha1=3aa21121aa9438156e768ae1ae733b4048246362 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7112870 2022-12-12 06:30:59 2022-12-12 06:30:59 2022-12-12 07:07:41 0:36:42 0:25:00 0:11:42 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi133 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'

fail 7112871 2022-12-12 06:31:00 2022-12-12 06:31:00 2022-12-12 07:11:01 0:40:01 0:22:50 0:17:11 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi093 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid faa2b162-79ea-11ed-8441-001a4aab830c -e sha1=3aa21121aa9438156e768ae1ae733b4048246362 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7112872 2022-12-12 06:31:01 2022-12-12 06:31:02 2022-12-12 07:18:45 0:47:43 0:33:23 0:14:20 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7112873 2022-12-12 06:31:02 2022-12-12 06:31:03 2022-12-12 07:08:34 0:37:31 0:22:32 0:14:59 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 7112874 2022-12-12 06:31:03 2022-12-12 06:31:04 2022-12-12 07:07:31 0:36:27 0:20:32 0:15:55 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} 2
Failure Reason:

Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

fail 7112875 2022-12-12 06:31:04 2022-12-12 06:31:05 2022-12-12 07:15:09 0:44:04 0:27:19 0:16:45 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi142 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=26225fffe3b7e7ee99dba0391b22e6dd6576e065 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

pass 7112876 2022-12-12 06:31:06 2022-12-12 06:31:06 2022-12-12 07:26:45 0:55:39 0:41:17 0:14:22 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2