Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7122930 2022-12-21 03:10:44 2022-12-21 03:10:46 2022-12-21 03:42:27 0:31:41 0:21:53 0:09:48 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
dead 7122931 2022-12-21 03:10:45 2022-12-21 03:10:46 2022-12-21 03:55:57 0:45:11 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} 2
Failure Reason:

Error reimaging machines: reached maximum tries (180) after waiting for 2700 seconds

pass 7122932 2022-12-21 03:10:46 2022-12-21 03:10:46 2022-12-21 04:03:35 0:52:49 0:28:36 0:24:13 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7122933 2022-12-21 03:10:47 2022-12-21 03:10:47 2022-12-21 03:43:09 0:32:22 0:23:37 0:08:45 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi026 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6ddf9b80-80e0-11ed-97a3-001a4aab830c -e sha1=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"
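
Note: jobs 7122933, 7122947, 7122956, 7122964 and 7122975 all fail with the same status 22 (EINVAL) from this staggered-upgrade step. One plausible reading, offered here as an assumption rather than an established root cause, is that the v16.2.4 orchestrator the cluster is bootstrapped from does not yet accept the staggered-upgrade flags. Stripped of the cephadm shell wrapper, the command the task issues is:

    # Staggered upgrade as attempted by the mds_upgrade_sequence task (taken from the
    # failure reason above; $sha1 is the target ceph-ci build being upgraded to).
    ceph orch upgrade start \
        --image quay.ceph.io/ceph-ci/ceph:$sha1 \
        --daemon-types mgr      # upgrade only the mgr daemons first

An orchestrator release that predates staggered upgrades rejects --daemon-types, which would surface exactly as the EINVAL seen here.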

pass 7122934 2022-12-21 03:10:48 2022-12-21 03:10:48 2022-12-21 03:56:45 0:45:57 0:33:54 0:12:03 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7122935 2022-12-21 03:10:49 2022-12-21 03:10:49 2022-12-21 03:50:05 0:39:16 0:25:10 0:14:06 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/iozone}} 2
fail 7122936 2022-12-21 03:10:50 2022-12-21 03:10:50 2022-12-21 03:47:22 0:36:32 0:21:39 0:14:53 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Test failure: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)

pass 7122937 2022-12-21 03:10:51 2022-12-21 03:10:51 2022-12-21 03:44:17 0:33:26 0:18:42 0:14:44 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/test_journal_migration} 2
pass 7122938 2022-12-21 03:10:52 2022-12-21 03:10:52 2022-12-21 04:15:03 1:04:11 0:48:29 0:15:42 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
fail 7122939 2022-12-21 03:10:53 2022-12-21 03:10:53 2022-12-21 04:04:43 0:53:50 0:29:59 0:23:51 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi080 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

pass 7122940 2022-12-21 03:10:54 2022-12-21 03:10:54 2022-12-21 03:39:32 0:28:38 0:17:09 0:11:29 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
fail 7122941 2022-12-21 03:10:55 2022-12-21 03:10:55 2022-12-21 04:10:28 0:59:33 0:39:38 0:19:55 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7122942 2022-12-21 03:10:56 2022-12-21 03:10:56 2022-12-21 03:41:32 0:30:36 0:14:20 0:16:16 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
pass 7122943 2022-12-21 03:10:57 2022-12-21 03:10:57 2022-12-21 04:58:47 1:47:50 1:29:06 0:18:44 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} 2
pass 7122944 2022-12-21 03:10:57 2022-12-21 03:10:58 2022-12-21 04:18:56 1:07:58 0:43:01 0:24:57 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7122945 2022-12-21 03:10:58 2022-12-21 03:10:58 2022-12-21 04:10:12 0:59:14 0:48:50 0:10:24 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
pass 7122946 2022-12-21 03:10:59 2022-12-21 03:10:59 2022-12-21 04:02:35 0:51:36 0:35:49 0:15:47 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
fail 7122947 2022-12-21 03:11:00 2022-12-21 03:11:00 2022-12-21 03:50:56 0:39:56 0:24:49 0:15:07 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi079 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 73b33714-80e1-11ed-97a3-001a4aab830c -e sha1=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7122948 2022-12-21 03:11:01 2022-12-21 03:11:01 2022-12-21 03:43:27 0:32:26 0:17:12 0:15:14 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi074 with status 5: 'sudo systemctl stop ceph-2cfe6eec-80e1-11ed-97a3-001a4aab830c@mon.b'

dead 7122949 2022-12-21 03:11:02 2022-12-21 03:11:02 2022-12-21 03:33:27 0:22:25 0:05:24 0:17:01 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

The ansible task running 'cpan Amazon::S3' failed with rc 25 on both smithi066.front.sepia.ceph.com and smithi092.front.sepia.ceph.com. The CPAN index fetched from http://apt-mirror.sepia.ceph.com/CPAN/ was generated on Fri, 12 Feb 2016 02:17:02 GMT and is 2504 days old, the checksum file authors/id/T/TI/TIMA/CHECKSUMS does not contain the key 'cpan_path' for 'Amazon-S3-0.45.tar.gz', and cpan took the default answer "no" at the "Proceed nonetheless? [no]" prompt and aborted.

fail 7122950 2022-12-21 03:11:03 2022-12-21 03:11:03 2022-12-21 09:59:33 6:48:30 6:35:45 0:12:45 smithi main rhel 8.6 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi052 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'
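
Note: both valgrind jobs in this run (7122950 here and 7122970 below) exit with status 124, which is what GNU timeout returns when it kills a command for exceeding its limit; the workunit wrapper in the failure reason runs fsstress.sh under `timeout 6h`, so the test simply did not complete within six hours under valgrind. A minimal illustration of where the 124 comes from:

    # GNU timeout kills the wrapped command after the limit and exits with 124.
    timeout 2s sleep 10
    echo $?   # prints 124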

fail 7122951 2022-12-21 03:11:04 2022-12-21 03:11:04 2022-12-21 03:49:50 0:38:46 0:19:02 0:19:44 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} 2
Failure Reason:

Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

pass 7122952 2022-12-21 03:11:05 2022-12-21 03:11:05 2022-12-21 04:14:04 1:02:59 0:41:12 0:21:47 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} 2
fail 7122953 2022-12-21 03:11:06 2022-12-21 03:11:06 2022-12-21 04:19:13 1:08:07 0:43:08 0:24:59 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}
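
Note: three workload jobs in this run (7122953, 7122961 and 7122973) fail the same way, with the scrub-thrashing task reporting 'backtrace' damage on an MDS rank. For triage, the MDS damage table can be inspected on the affected rank; the filesystem name and rank below are placeholders, not values taken from this report:

    # List the damage entries recorded by rank 0 of a filesystem named "cephfs"
    # (placeholder fs name and rank; substitute the run's actual values).
    ceph tell mds.cephfs:0 damage ls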

fail 7122954 2022-12-21 03:11:07 2022-12-21 03:11:07 2022-12-21 04:00:45 0:49:38 0:29:57 0:19:41 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/damage} 2
Failure Reason:

Test failure: test_open_ino_errors (tasks.cephfs.test_damage.TestDamage)

pass 7122955 2022-12-21 03:11:08 2022-12-21 03:11:08 2022-12-21 03:52:18 0:41:10 0:22:35 0:18:35 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 5
fail 7122956 2022-12-21 03:11:09 2022-12-21 03:11:09 2022-12-21 03:57:52 0:46:43 0:23:42 0:23:01 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi077 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7ef2ec7c-80e2-11ed-97a3-001a4aab830c -e sha1=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

pass 7122957 2022-12-21 03:11:10 2022-12-21 03:11:10 2022-12-21 04:54:34 1:43:24 1:19:13 0:24:11 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} 3
pass 7122958 2022-12-21 03:11:11 2022-12-21 03:11:11 2022-12-21 04:29:11 1:18:00 0:58:49 0:19:11 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} 2
pass 7122959 2022-12-21 03:11:12 2022-12-21 03:11:12 2022-12-21 03:58:32 0:47:20 0:28:37 0:18:43 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7122960 2022-12-21 03:11:13 2022-12-21 03:11:13 2022-12-21 04:12:32 1:01:19 0:45:55 0:15:24 smithi main rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
fail 7122961 2022-12-21 03:11:14 2022-12-21 03:11:14 2022-12-21 03:54:38 0:43:24 0:32:19 0:11:05 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7122962 2022-12-21 03:11:15 2022-12-21 03:11:15 2022-12-21 03:47:29 0:36:14 0:21:07 0:15:07 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} 2
pass 7122963 2022-12-21 03:11:16 2022-12-21 03:11:16 2022-12-21 04:05:39 0:54:23 0:30:50 0:23:33 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7122964 2022-12-21 03:11:17 2022-12-21 03:11:17 2022-12-21 03:56:49 0:45:32 0:23:44 0:21:48 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi084 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 52551668-80e2-11ed-97a3-001a4aab830c -e sha1=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"

fail 7122965 2022-12-21 03:11:17 2022-12-21 03:11:18 2022-12-21 04:04:03 0:52:45 0:39:52 0:12:53 smithi main centos 8.stream fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{centos_8} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

pass 7122966 2022-12-21 03:11:18 2022-12-21 03:11:19 2022-12-21 05:59:59 2:48:40 2:29:55 0:18:45 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
pass 7122967 2022-12-21 03:11:19 2022-12-21 03:11:19 2022-12-21 03:52:07 0:40:48 0:18:15 0:22:33 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
dead 7122968 2022-12-21 03:11:20 2022-12-21 03:11:20 2022-12-21 03:36:58 0:25:38 0:05:47 0:19:51 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

The ansible task running 'cpan Amazon::S3' failed with rc 25 on both smithi017.front.sepia.ceph.com and smithi039.front.sepia.ceph.com. The CPAN index fetched from http://apt-mirror.sepia.ceph.com/CPAN/ was generated on Fri, 12 Feb 2016 02:17:02 GMT and is 2504 days old, the checksum file authors/id/T/TI/TIMA/CHECKSUMS does not contain the key 'cpan_path' for 'Amazon-S3-0.45.tar.gz', and cpan took the default answer "no" at the "Proceed nonetheless? [no]" prompt and aborted.

fail 7122969 2022-12-21 03:11:21 2022-12-21 03:11:21 2022-12-21 03:55:12 0:43:51 0:21:05 0:22:46 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/pool-perm} 2
Failure Reason:

Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

fail 7122970 2022-12-21 03:11:22 2022-12-21 03:11:22 2022-12-21 10:07:56 6:56:34 6:33:55 0:22:39 smithi main centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi110 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 7122971 2022-12-21 03:11:23 2022-12-21 03:11:23 2022-12-21 03:59:22 0:47:59 0:28:01 0:19:58 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} 2
Failure Reason:

Test failure: test_scrub_backtrace (tasks.cephfs.test_scrub.TestScrub)

pass 7122972 2022-12-21 03:11:24 2022-12-21 03:11:24 2022-12-21 04:27:27 1:16:03 0:55:45 0:20:18 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snap-schedule} 2
fail 7122973 2022-12-21 03:11:25 2022-12-21 03:11:25 2022-12-21 04:26:10 1:14:45 0:50:49 0:23:56 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7122974 2022-12-21 03:11:26 2022-12-21 03:11:26 2022-12-21 03:56:48 0:45:22 0:28:41 0:16:41 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi111 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

fail 7122975 2022-12-21 03:11:27 2022-12-21 03:11:27 2022-12-21 03:58:24 0:46:57 0:23:41 0:23:16 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi132 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62424866-80e2-11ed-97a3-001a4aab830c -e sha1=47f77a750cb9b297b7b8ab2c8bc3d22102dbeba8 -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'"