Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7281754 2023-05-21 14:17:20 2023-05-21 14:18:45 2023-05-21 14:19:49 0:01:04 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 4
Failure Reason:

Error reimaging machines: Failed to power on smithi072

fail 7281755 2023-05-21 14:17:21 2023-05-21 14:18:46 2023-05-21 15:17:05 0:58:19 0:39:45 0:18:34 smithi stdin-killer ubuntu 22.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
Failure Reason:

Test failure: test_shutdown_killpoint_SHUTDOWN_EMPTYSUBTREES (tasks.cephfs.test_failover.TestShutdownKillpoints)

dead 7281756 2023-05-21 14:17:22 2023-05-21 14:18:46 2023-05-22 02:27:16 12:08:30 smithi stdin-killer centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

hit max job timeout

dead 7281757 2023-05-21 14:17:23 2023-05-21 14:18:47 2023-05-21 14:19:51 0:01:04 smithi stdin-killer ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/quota} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi047

fail 7281758 2023-05-21 14:17:24 2023-05-21 14:18:29 2023-05-21 14:56:35 0:38:06 0:26:11 0:11:55 smithi stdin-killer ubuntu 22.04 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi144 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'

fail 7281759 2023-05-21 14:17:24 2023-05-21 14:18:47 2023-05-21 19:00:50 4:42:03 4:32:50 0:09:13 smithi stdin-killer centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Cannot connect to remote host smithi033

fail 7281760 2023-05-21 14:17:25 2023-05-21 14:18:47 2023-05-21 14:31:25 0:12:38 smithi stdin-killer centos 8.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-common} 3
Failure Reason:

Command failed on smithi164 with status 1: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'

fail 7281761 2023-05-21 14:17:26 2023-05-21 14:18:48 2023-05-21 15:03:26 0:44:38 0:32:58 0:11:40 smithi stdin-killer ubuntu 22.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

fail 7281762 2023-05-21 14:17:27 2023-05-21 14:18:48 2023-05-21 14:57:24 0:38:36 0:20:43 0:17:53 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
Failure Reason:

Test failure: test_cap_revoke_nonresponder (tasks.cephfs.test_misc.TestMisc)

dead 7281763 2023-05-21 14:17:28 2023-05-21 14:18:49 2023-05-21 14:19:53 0:01:04 smithi stdin-killer ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/fsstress validater/valgrind} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi039

fail 7281764 2023-05-21 14:17:29 2023-05-21 14:18:49 2023-05-21 15:10:43 0:51:54 0:40:08 0:11:46 smithi stdin-killer rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi002 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

fail 7281765 2023-05-21 14:17:30 2023-05-21 14:18:50 2023-05-21 14:53:26 0:34:36 0:24:35 0:10:01 smithi stdin-killer rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
Failure Reason:

Test failure: test_open_inode (tasks.cephfs.test_strays.TestStrays)

dead 7281766 2023-05-21 14:17:31 2023-05-21 14:18:50 2023-05-21 14:19:54 0:01:04 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi008

fail 7281767 2023-05-21 14:17:31 2023-05-21 14:18:50 2023-05-21 14:45:52 0:27:02 0:12:00 0:15:02 smithi stdin-killer rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/test_journal_migration} 2
Failure Reason:

smithi186.front.sepia.ceph.com: MODULE FAILURE (rc=1) in the Ansible dnf module; shared connection to smithi186.front.sepia.ceph.com closed.

Traceback (most recent call last):
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1684680080.1533403-5863-160785222538821/AnsiballZ_dnf.py", line 102, in <module>
    _ansiballz_main()
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1684680080.1533403-5863-160785222538821/AnsiballZ_dnf.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/ubuntu/.ansible/tmp/ansible-tmp-1684680080.1533403-5863-160785222538821/AnsiballZ_dnf.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.dnf', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_ansible.legacy.dnf_payload_0mn6ajd8/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py", line 1346, in <module>
  File "/tmp/ansible_ansible.legacy.dnf_payload_0mn6ajd8/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py", line 1335, in main
  File "/tmp/ansible_ansible.legacy.dnf_payload_0mn6ajd8/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py", line 1310, in run
  File "/tmp/ansible_ansible.legacy.dnf_payload_0mn6ajd8/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py", line 1209, in ensure
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 1297, in _sig_check_pkg
    sigresult = dnf.rpm.miscutils.checkSig(ts, po.localPkg())
  File "/usr/lib/python3.6/site-packages/dnf/rpm/miscutils.py", line 102, in checkSig
    fdno = os.open(package, os.O_RDONLY|os.O_NOCTTY|os.O_CLOEXEC)
FileNotFoundError: [Errno 2] No such file or directory: '/var/cache/dnf/rhel-8-for-x86_64-appstream-rpms-4982e5444e88a9d1/packages/librbd1-12.2.7-9.el8.x86_64.rpm'

fail 7281768 2023-05-21 14:17:32 2023-05-21 14:18:51 2023-05-21 14:48:27 0:29:36 0:16:08 0:13:28 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
Failure Reason:

{'smithi165.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}

fail 7281769 2023-05-21 14:17:33 2023-05-21 14:18:51 2023-05-21 17:50:16 3:31:25 3:20:52 0:10:33 smithi stdin-killer rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi172 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/dbench.sh'

fail 7281770 2023-05-21 14:17:34 2023-05-21 14:18:52 2023-05-21 19:00:35 4:41:43 4:29:56 0:11:47 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
Failure Reason:

Cannot connect to remote host smithi060

fail 7281771 2023-05-21 14:17:35 2023-05-21 14:18:52 2023-05-21 19:01:52 4:43:00 4:30:26 0:12:34 smithi stdin-killer centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/xfstests-dev} 2
Failure Reason:

Test failure: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)

fail 7281772 2023-05-21 14:17:36 2023-05-21 19:01:36 4:25:44 smithi stdin-killer ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
dead 7281773 2023-05-21 14:17:37 2023-05-21 14:18:53 2023-05-21 14:19:57 0:01:04 smithi stdin-killer centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi103

fail 7281774 2023-05-21 14:17:37 2023-05-21 14:18:53 2023-05-21 18:06:06 3:47:13 3:33:35 0:13:38 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi031 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7281775 2023-05-21 14:17:38 2023-05-21 14:18:54 2023-05-21 14:52:32 0:33:38 0:17:00 0:16:38 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 5
Failure Reason:

Command failed on smithi100 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'

dead 7281776 2023-05-21 14:17:39 2023-05-21 14:18:54 2023-05-21 14:19:58 0:01:04 smithi stdin-killer ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/lockdep} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi061

dead 7281777 2023-05-21 14:17:40 2023-05-21 14:18:55 2023-05-21 14:19:59 0:01:04 smithi stdin-killer centos 8.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-dbench} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi112

pass 7281778 2023-05-21 14:17:41 2023-05-21 14:18:55 2023-05-21 14:55:24 0:36:29 0:19:17 0:17:12 smithi stdin-killer ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 7281779 2023-05-21 14:17:42 2023-05-21 14:18:56 2023-05-21 17:14:58 2:56:02 2:44:12 0:11:50 smithi stdin-killer rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 7281780 2023-05-21 14:17:42 2023-05-21 14:19:06 2023-05-21 14:48:25 0:29:19 0:14:58 0:14:21 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 4
Failure Reason:

Command failed on smithi148 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7281781 2023-05-21 14:17:43 2023-05-21 14:19:07 2023-05-21 15:12:57 0:53:50 0:39:17 0:14:33 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iozone}} 3
fail 7281782 2023-05-21 14:17:44 2023-05-21 14:19:07 2023-05-21 14:55:39 0:36:32 0:25:18 0:11:14 smithi stdin-killer rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} 2
Failure Reason:

Test failure: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)

fail 7281783 2023-05-21 14:17:45 2023-05-21 14:19:08 2023-05-21 14:48:29 0:29:21 0:15:40 0:13:41 smithi stdin-killer centos 8.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/v1 mount overrides/{distro/testing/k-testing}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/fscrypt-ffsb} 3
Failure Reason:

Command failed (workunit test fs/fscrypt.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/fscrypt.sh none ffsb'

dead 7281784 2023-05-21 14:17:46 2023-05-21 14:19:18 2023-05-21 14:30:08 0:10:50 smithi stdin-killer rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
Failure Reason:

Error reimaging machines: 'ssh_keyscan smithi153.front.sepia.ceph.com' reached maximum tries (5) after waiting for 5 seconds

fail 7281785 2023-05-21 14:17:47 2023-05-21 14:19:59 2023-05-21 15:17:30 0:57:31 0:42:34 0:14:57 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi047 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 7281786 2023-05-21 14:17:47 2023-05-21 14:19:59 2023-05-21 15:09:02 0:49:03 0:38:06 0:10:57 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
Failure Reason:

Command failed on smithi039 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

pass 7281787 2023-05-21 14:17:48 2023-05-21 14:20:00 2023-05-21 15:03:39 0:43:39 0:32:18 0:11:21 smithi stdin-killer centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7281788 2023-05-21 14:17:49 2023-05-21 14:20:00 2023-05-21 15:04:19 0:44:19 0:26:58 0:17:21 smithi stdin-killer ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_20.04} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi061 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 7281789 2023-05-21 14:17:50 2023-05-21 14:20:01 2023-05-21 14:58:15 0:38:14 0:21:29 0:16:45 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
Failure Reason:

Test failure: test_cap_revoke_nonresponder (tasks.cephfs.test_misc.TestMisc)

pass 7281790 2023-05-21 14:17:51 2023-05-21 14:20:11 2023-05-21 16:08:48 1:48:37 1:32:57 0:15:40 smithi stdin-killer ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
fail 7281791 2023-05-21 14:17:52 2023-05-21 14:20:12 2023-05-21 15:18:21 0:58:09 0:41:23 0:16:46 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7281792 2023-05-21 14:17:53 2023-05-21 14:23:02 2023-05-21 17:59:21 3:36:19 3:25:32 0:10:47 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{no-subvolume} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi073 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7281793 2023-05-21 14:17:54 2023-05-21 14:26:28 2023-05-21 19:01:48 4:35:20 4:21:01 0:14:19 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7281794 2023-05-21 14:17:54 2023-05-21 14:29:29 2023-05-21 19:01:48 4:32:19 4:21:14 0:11:05 smithi stdin-killer rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
fail 7281795 2023-05-21 14:17:55 2023-05-21 14:30:20 2023-05-21 18:11:21 3:41:01 3:28:28 0:12:33 smithi stdin-killer rhel 8.6 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi163 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7281796 2023-05-21 14:17:56 2023-05-21 14:31:21 2023-05-21 19:00:29 4:29:08 4:20:42 0:08:26 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

Cannot connect to remote host smithi026

fail 7281797 2023-05-21 14:17:57 2023-05-21 14:31:31 2023-05-21 14:58:18 0:26:47 0:14:38 0:12:09 smithi stdin-killer rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/openfiletable} 2
Failure Reason:

Test failure: test_max_items_per_obj (tasks.cephfs.test_openfiletable.OpenFileTable)

fail 7281798 2023-05-21 14:17:58 2023-05-21 14:36:54 2023-05-21 18:10:47 3:33:53 3:23:09 0:10:44 smithi stdin-killer rhel 8.6 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi101 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/dbench.sh'

fail 7281799 2023-05-21 14:17:58 2023-05-21 14:37:25 2023-05-21 15:04:35 0:27:10 0:14:14 0:12:56 smithi stdin-killer ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
Failure Reason:

Command failed on smithi183 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7281800 2023-05-21 14:17:59 2023-05-21 14:40:47 2023-05-21 15:23:06 0:42:19 0:30:46 0:11:33 smithi stdin-killer centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7281801 2023-05-21 14:18:00 2023-05-21 14:41:47 2023-05-21 15:10:20 0:28:33 0:15:57 0:12:36 smithi stdin-killer centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/quota} 2
Failure Reason:

Test failure: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

fail 7281802 2023-05-21 14:18:01 2023-05-21 14:42:38 2023-05-21 15:07:11 0:24:33 0:13:18 0:11:15 smithi stdin-killer ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_20.04} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} 2
Failure Reason:

Command failed (workunit test fs/test_python.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=83caa3d7936db5043e7b3c387d3087f4af006726 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/test_python.sh'

fail 7281803 2023-05-21 14:18:02 2023-05-21 14:42:59 2023-05-21 19:00:58 4:17:59 4:03:59 0:14:00 smithi stdin-killer centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Cannot connect to remote host smithi055

pass 7281804 2023-05-21 14:18:02 2023-05-21 14:43:39 2023-05-21 15:34:13 0:50:34 0:38:46 0:11:48 smithi stdin-killer rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/pjd}} 3