Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7337666 2023-07-14 13:58:42 2023-07-14 15:45:41 2023-07-14 16:16:30 0:30:49 0:18:56 0:11:53 smithi main centos 9.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} thrashers/cache thrashosds-health workloads/rbd_fsx_nbd} 3
pass 7337668 2023-07-14 13:58:43 2023-07-14 15:47:12 2023-07-14 17:17:30 1:30:18 1:17:49 0:12:29 smithi main centos 8.stream rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/1G 7-workloads/qemu_xfstests} 2
pass 7337670 2023-07-14 13:58:44 2023-07-14 15:49:48 2023-07-14 16:33:58 0:44:10 0:34:35 0:09:35 smithi main rhel 8.6 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests_with_defaults} 3
pass 7337671 2023-07-14 13:58:44 2023-07-14 15:50:16 2023-07-14 16:31:53 0:41:37 0:25:22 0:16:15 smithi main centos 9.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/rbd_fsx_cache_writeback} 2
pass 7337673 2023-07-14 13:58:45 2023-07-14 15:57:08 2023-07-14 17:22:58 1:25:50 1:11:14 0:14:36 smithi main centos 8.stream rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{centos_8} workloads/qemu_on_immutable_object_cache_and_thrash} 2
pass 7337677 2023-07-14 13:58:46 2023-07-14 17:34:15 5189 smithi main rhel 8.6 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-bitmap qemu/xfstests supported-random-distro$/{rhel_8} workloads/rebuild_object_map} 3
pass 7337680 2023-07-14 13:58:47 2023-07-14 15:59:59 2023-07-14 17:36:02 1:36:03 1:21:48 0:14:15 smithi main centos 9.stream rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zstd 4-supported-random-distro$/{centos_latest} 5-pool/none 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} 3
pass 7337683 2023-07-14 13:58:48 2023-07-14 16:02:45 2023-07-14 17:07:19 1:04:34 0:54:18 0:10:16 smithi main rhel 8.6 rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/journaling msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/qemu_bonnie} 3
pass 7337686 2023-07-14 13:58:49 2023-07-14 16:31:53 1069 smithi main rhel 8.6 rbd/basic/{base/install cachepool/small clusters/{fixed-1 openstack} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{rhel_8} tasks/rbd_lock_and_fence} 1
pass 7337688 2023-07-14 13:58:50 2023-07-14 16:06:06 2023-07-14 16:59:03 0:52:57 0:40:06 0:12:51 smithi main centos 8.stream rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-workunit-minimum} 2
fail 7337692 2023-07-14 13:58:51 2023-07-14 16:24:52 2023-07-14 17:29:47 1:04:55 0:52:15 0:12:40 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-low-osd-mem-target validator/memcheck workloads/python_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi157 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

pass 7337694 2023-07-14 13:58:52 2023-07-14 16:25:55 2023-07-14 21:47:07 5:21:12 5:06:32 0:14:40 smithi main ubuntu 20.04 rbd/encryption/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{ubuntu_20.04} workloads/qemu_xfstests_none_luks2} 3
pass 7337697 2023-07-14 13:58:53 2023-07-14 16:30:32 2023-07-14 16:59:13 0:28:41 0:18:22 0:10:19 smithi main ubuntu 22.04 rbd/cli/{base/install clusters/{fixed-1 openstack} features/journaling msgr-failures/few objectstore/bluestore-comp-zlib pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_luks_encryption} 1
dead 7337701 2023-07-14 13:58:54 2023-07-14 16:32:21 2023-07-14 16:33:35 0:01:14 smithi main centos 9.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{centos_latest} thrashers/cache thrashosds-health workloads/rbd_fsx_cache_writethrough} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi195

pass 7337703 2023-07-14 13:58:55 2023-07-14 16:33:08 2023-07-14 17:21:49 0:48:41 0:37:15 0:11:26 smithi main ubuntu 22.04 rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} 1
pass 7337706 2023-07-14 13:58:56 2023-07-14 16:34:46 2023-07-14 17:46:15 1:11:29 1:04:03 0:07:26 smithi main rhel 8.6 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-journal-stress-workunit} 2
pass 7337709 2023-07-14 13:58:57 2023-07-14 16:34:47 2023-07-14 17:07:22 0:32:35 0:19:19 0:13:16 smithi main centos 9.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/rbd_nbd} 3
pass 7337712 2023-07-14 13:58:58 2023-07-14 16:38:18 2023-07-14 19:19:35 2:41:17 2:27:51 0:13:26 smithi main ubuntu 22.04 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-comp-lz4 qemu/xfstests supported-random-distro$/{ubuntu_latest} workloads/dynamic_features} 3
pass 7337715 2023-07-14 13:58:59 2023-07-14 16:40:38 2023-07-14 17:35:29 0:54:51 0:39:54 0:14:57 smithi main ubuntu 20.04 rbd/pwl-cache/home/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_20.04} 4-cache-path 5-cache-mode/rwl 6-cache-size/8G 7-workloads/recovery} 2
dead 7337718 2023-07-14 13:58:59 2023-07-14 16:42:46 2023-07-14 16:46:30 0:03:44 smithi main rhel 8.6 rbd/singleton/{all/rbd_mirror objectstore/bluestore-hybrid openstack supported-random-distro$/{rhel_8}} 1
Failure Reason:

Error reimaging machines: Failed to power on smithi153

fail 7337721 2023-07-14 13:59:00 2023-07-14 16:44:29 2023-07-14 17:36:17 0:51:48 0:38:44 0:13:04 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-stupid validator/memcheck workloads/python_api_tests_with_defaults} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi171 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

dead 7337724 2023-07-14 13:59:01 2023-07-14 19:44:03 2023-07-14 19:52:14 0:08:11 smithi main ubuntu 22.04 rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_migration} 1
Failure Reason:

Error reimaging machines: Cannot connect to remote host smithi190

dead 7337727 2023-07-14 13:59:02 2023-07-14 19:45:51 2023-07-14 19:49:29 0:03:38 smithi main ubuntu 20.04 rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-comp-snappy qemu/xfstests supported-random-distro$/{ubuntu_20.04} workloads/dynamic_features_no_cache} 3
Failure Reason:

Error reimaging machines: Expected smithi179's OS to be ubuntu 20.04 but found ubuntu 22.04

dead 7337730 2023-07-14 13:59:03 2023-07-14 19:47:30 2023-07-14 19:51:11 0:03:41 smithi main ubuntu 20.04 rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-bitmap policy/none rbd-mirror/four-per-cluster supported-random-distro$/{ubuntu_20.04} workloads/rbd-mirror-journal-workunit} 2
Failure Reason:

Error reimaging machines: Expected smithi114's OS to be ubuntu 20.04 but found ubuntu 22.04

fail 7337733 2023-07-14 13:59:04 2023-07-14 19:48:52 2023-07-14 23:41:13 3:52:21 3:35:44 0:16:37 smithi main centos 9.stream rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} 3
Failure Reason:

Command failed (workunit test rbd/diff_continuous.sh) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'

pass 7337736 2023-07-14 13:59:05 2023-07-14 20:47:05 2696 smithi main ubuntu 22.04 rbd/basic/{base/install cachepool/small clusters/{fixed-1 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{ubuntu_latest} tasks/rbd_api_tests_old_format} 1
fail 7337739 2023-07-14 13:59:06 2023-07-14 19:51:11 2023-07-14 21:30:12 1:39:01 1:26:54 0:12:07 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-bitmap validator/memcheck workloads/python_api_tests_with_journaling} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=125 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

dead 7337742 2023-07-14 13:59:07 2023-07-14 19:52:48 2023-07-14 19:59:22 0:06:34 smithi main rhel 8.6 rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-comp-lz4 supported-random-distro$/{rhel_8} thrashers/cache thrashosds-health workloads/rbd_fsx_nocache} 2
Failure Reason:

Error reimaging machines: Expected smithi016's OS to be rhel 8.6 but found centos 8

pass 7337745 2023-07-14 13:59:08 2023-07-14 19:53:08 2023-07-14 21:39:39 1:46:31 1:36:04 0:10:27 smithi main rhel 8.6 rbd/pwl-cache/home/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/rwl 6-cache-size/1G 7-workloads/c_api_tests_with_defaults} 2
fail 7337749 2023-07-14 13:59:09 2023-07-14 22:06:50 7170 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-lz4 validator/memcheck workloads/rbd_mirror} 1
Failure Reason:

Command failed (workunit test rbd/test_rbd_mirror.sh) on smithi116 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_rbd_mirror.sh'

pass 7337752 2023-07-14 13:59:10 2023-07-14 19:55:06 2023-07-14 20:40:52 0:45:46 0:35:42 0:10:04 smithi main ubuntu 22.04 rbd/cli/{base/install clusters/{fixed-1 openstack} features/journaling msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_groups} 1
pass 7337755 2023-07-14 13:59:11 2023-07-14 19:55:46 2023-07-14 21:20:57 1:25:11 1:12:10 0:13:01 smithi main centos 9.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
pass 7337757 2023-07-14 13:59:12 2023-07-14 19:55:52 2023-07-14 20:54:19 0:58:27 0:42:49 0:15:38 smithi main ubuntu 20.04 rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_20.04} workloads/rbd-mirror-workunit-policy-simple} 2
fail 7337760 2023-07-14 13:59:12 2023-07-14 20:00:53 2023-07-14 22:40:41 2:39:48 2:24:44 0:15:04 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-lz4 validator/memcheck workloads/c_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 7337763 2023-07-14 13:59:13 2023-07-14 20:06:10 2023-07-14 22:39:49 2:33:39 2:22:05 0:11:34 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-snappy validator/memcheck workloads/c_api_tests_with_defaults} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 7337766 2023-07-14 13:59:14 2023-07-14 20:09:57 2023-07-14 22:09:27 1:59:30 1:45:01 0:14:29 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-zlib validator/memcheck workloads/c_api_tests_with_journaling} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=125 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 7337769 2023-07-14 13:59:15 2023-07-14 20:20:25 2023-07-14 21:20:17 0:59:52 0:49:14 0:10:38 smithi main centos 9.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-journal-workunit} 2
fail 7337772 2023-07-14 13:59:16 2023-07-14 20:23:04 2023-07-15 00:04:27 3:41:23 3:25:45 0:15:38 smithi main rhel 8.6 rbd/nbd/{base/install cluster/{fixed-3 openstack} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{rhel_8} thrashers/cache thrashosds-health workloads/rbd_nbd_diff_continuous} 3
Failure Reason:

Command failed (workunit test rbd/diff_continuous.sh) on smithi191 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_DEVICE_TYPE=nbd adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/diff_continuous.sh'

fail 7337775 2023-07-14 13:59:17 2023-07-14 20:28:42 2023-07-14 21:11:17 0:42:35 0:23:47 0:18:48 smithi main centos 9.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/replicated-data-pool supported-random-distro$/{centos_latest} workloads/python_api_tests_with_defaults} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7337778 2023-07-14 13:59:18 2023-07-14 20:33:10 2023-07-14 21:58:12 1:25:02 1:11:50 0:13:12 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-hybrid validator/memcheck workloads/python_api_tests} 1
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e3289f744b55c911f5d9b695cc2dfaa7044d8c97 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 VALGRIND=\'--tool=memcheck --leak-check=full\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

dead 7337781 2023-07-14 13:59:19 2023-07-14 20:34:28 2023-07-15 08:47:22 12:12:54 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason:

hit max job timeout