Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 7262027 2023-05-04 16:40:20 2023-05-04 16:41:06 2023-05-04 18:23:32 1:42:26 1:28:24 0:14:02 smithi main centos 8.stream rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-low-osd-mem-target 4-supported-random-distro$/{centos_8} 5-pool/none 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
fail 7262028 2023-05-04 16:40:21 2023-05-04 16:41:06 2023-05-04 20:22:12 3:41:06 3:34:11 0:06:55 smithi main centos 8.stream rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_cli_generic} 1
Failure Reason:

Command failed (workunit test rbd/cli_generic.sh) on smithi062 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2cc327ab03de508c4ed32f598c61221f937ffba0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'

pass 7262029 2023-05-04 16:40:21 2023-05-04 16:41:07 2023-05-04 18:27:42 1:46:35 1:37:27 0:09:08 smithi main centos 8.stream rbd/encryption/{cache/writeback clusters/{fixed-3 openstack} features/readbalance msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-cache-pool supported-random-distro$/{centos_8} workloads/qemu_xfstests_luks1} 3
dead 7262030 2023-05-04 16:40:22 2023-05-04 16:41:07 2023-05-05 04:54:08 12:13:01 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zlib policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-fsx-workunit} 2
Failure Reason:

hit max job timeout

pass 7262031 2023-05-04 16:40:23 2023-05-04 16:41:07 2023-05-04 18:42:21 2:01:14 1:52:58 0:08:16 smithi main centos 8.stream rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{centos_8} workloads/qemu_xfstests} 3
pass 7262032 2023-05-04 16:40:24 2023-05-04 16:41:08 2023-05-04 17:24:14 0:43:06 0:33:37 0:09:29 smithi main centos 8.stream rbd/qemu/{cache/writearound clusters/{fixed-3 openstack} features/journaling msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/qemu_bonnie} 3
pass 7262033 2023-05-04 16:40:24 2023-05-04 16:41:08 2023-05-04 17:15:22 0:34:14 0:25:43 0:08:31 smithi main centos 8.stream rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{centos_8} workloads/fio_on_immutable_object_cache} 2
pass 7262034 2023-05-04 16:40:25 2023-05-04 16:41:08 2023-05-04 17:14:24 0:33:16 0:20:14 0:13:02 smithi main centos 8.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{centos_8} thrashers/cache thrashosds-health workloads/rbd_fsx_nbd} 3
fail 7262035 2023-05-04 16:40:26 2023-05-04 16:41:09 2023-05-04 17:53:06 1:11:57 1:04:24 0:07:33 smithi main rhel 8.4 rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} 2
Failure Reason:

Command failed on smithi176 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'

pass 7262036 2023-05-04 16:40:27 2023-05-04 16:41:09 2023-05-04 17:26:16 0:45:07 0:37:12 0:07:55 smithi main centos 8.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rbd_api_tests_no_locking} 2
dead 7262037 2023-05-04 16:40:27 2023-05-04 16:41:09 2023-05-04 17:04:39 0:23:30 smithi main centos 8.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rbd_nbd} 3
dead 7262038 2023-05-04 16:40:28 2023-05-04 16:41:10 2023-05-04 16:54:41 0:13:31 0:04:03 0:09:28 smithi main rhel 8.4 rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/filestore-xfs pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} 1
Failure Reason:

{'smithi163.front.sepia.ceph.com': {
    '_ansible_no_log': False,
    'changed': False,
    'cmd': "subscription-manager release --list | grep -E '[0-9]'",
    'delta': '0:00:00.410184',
    'end': '2023-05-04 16:52:57.852574',
    'failed_when_result': True,
    'invocation': {'module_args': {
        '_raw_params': "subscription-manager release --list | grep -E '[0-9]'",
        '_uses_shell': True,
        'argv': None, 'chdir': None, 'creates': None, 'executable': None,
        'removes': None, 'stdin': None, 'stdin_add_newline': True,
        'strip_empty_ends': True, 'warn': True}},
    'msg': 'non-zero return code',
    'rc': 1,
    'start': '2023-05-04 16:52:57.442390',
    'stderr': 'Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information.',
    'stderr_lines': ['Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information.'],
    'stdout': '',
    'stdout_lines': []}}

dead 7262039 2023-05-04 16:40:29 2023-05-04 16:41:10 2023-05-04 17:00:04 0:18:54 0:06:38 0:12:16 smithi main centos 8.stream rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-hybrid qemu/xfstests supported-random-distro$/{centos_8} workloads/dynamic_features} 3
Failure Reason:

{'smithi184.front.sepia.ceph.com': {
    '_ansible_no_log': False,
    'changed': True,
    'cmd': ['semodule', '-i', '/tmp/nrpe.pp'],
    'delta': '0:00:05.007779',
    'end': '2023-05-04 16:57:57.781073',
    'invocation': {'module_args': {
        '_raw_params': 'semodule -i /tmp/nrpe.pp',
        '_uses_shell': False,
        'argv': None, 'chdir': None, 'creates': None, 'executable': None,
        'removes': None, 'stdin': None, 'stdin_add_newline': True,
        'strip_empty_ends': True, 'warn': True}},
    'msg': 'non-zero return code',
    'rc': 1,
    'start': '2023-05-04 16:57:52.773294',
    'stderr': 'libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).\nsemodule: Failed on /tmp/nrpe.pp!',
    'stderr_lines': ['libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).', 'semodule: Failed on /tmp/nrpe.pp!'],
    'stdout': '',
    'stdout_lines': []}}