Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7409812 2023-10-03 19:52:46 2023-10-03 19:53:29 2023-10-03 23:30:56 3:37:27 3:26:01 0:11:26 smithi main centos 8.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{centos_8} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi114 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'
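Status 124 here is almost certainly the "timeout 3h" wrapper expiring (124 is GNU timeout's exit code for a command it had to kill), so the workunit hit the three-hour cap rather than failing outright; the status-1 failures further down are the script itself exiting non-zero. As a rough, hedged sketch, the workunit could be re-run by hand roughly as follows, assuming a node that still has the /home/ubuntu/cephtest/clone.client.0 checkout and a reachable test cluster with a client.0 key (the paths and CEPH_REF value are copied from the command above, everything else is an assumption):

    # Hedged sketch only: manually re-run the rbd_support_module_recovery workunit.
    # Assumes the cephtest clone exists and the ceph CLI can reach the test cluster.
    export TESTDIR=/home/ubuntu/cephtest
    export CEPH_ARGS="--cluster ceph"
    export CEPH_ID=0
    export CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231
    cd "$TESTDIR/clone.client.0/qa/workunits/rbd"
    timeout 3h ./rbd_support_module_recovery.sh
    echo "exit status: $?"   # 124 = killed by the 3h timeout, other non-zero = script failure, 0 = pass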

fail 7409813 2023-10-03 19:52:46 2023-10-03 19:53:30 2023-10-03 23:27:00 3:33:30 3:22:48 0:10:42 smithi main centos 8.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi139 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

pass 7409814 2023-10-03 19:52:47 2023-10-03 19:53:30 2023-10-03 22:00:03 2:06:33 1:58:03 0:08:30 smithi main centos 9.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{centos_latest} workloads/rbd_support_module_recovery} 1
pass 7409815 2023-10-03 19:52:48 2023-10-03 22:07:29 2:03:19 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
pass 7409816 2023-10-03 19:52:49 2023-10-03 19:53:31 2023-10-03 21:38:55 1:45:24 1:38:16 0:07:08 smithi main rhel 8.6 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-bitmap pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/rbd_support_module_recovery} 1
fail 7409817 2023-10-03 19:52:49 2023-10-03 19:53:31 2023-10-03 22:07:16 2:13:45 2:04:22 0:09:23 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

pass 7409818 2023-10-03 19:52:50 2023-10-03 19:53:31 2023-10-03 22:10:38 2:17:07 2:07:51 0:09:16 smithi main centos 9.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{centos_latest} workloads/rbd_support_module_recovery} 1
pass 7409819 2023-10-03 19:52:51 2023-10-03 19:53:31 2023-10-03 21:34:55 1:41:24 1:32:02 0:09:22 smithi main centos 8.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/rbd_support_module_recovery} 1
pass 7409820 2023-10-03 19:52:52 2023-10-03 19:53:32 2023-10-03 21:59:33 2:06:01 1:55:17 0:10:44 smithi main ubuntu 20.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-hybrid pool/ec-data-pool supported-random-distro$/{ubuntu_20.04} workloads/rbd_support_module_recovery} 1
pass 7409821 2023-10-03 19:52:52 2023-10-03 19:53:32 2023-10-03 21:53:23 1:59:51 1:50:18 0:09:33 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-zlib pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
pass 7409822 2023-10-03 19:52:53 2023-10-03 19:53:32 2023-10-03 21:36:15 1:42:43 1:36:40 0:06:03 smithi main rhel 8.6 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-lz4 pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/rbd_support_module_recovery} 1
fail 7409823 2023-10-03 19:52:54 2023-10-03 19:53:33 2023-10-03 22:29:25 2:35:52 2:26:36 0:09:16 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-stupid pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

fail 7409824 2023-10-03 19:52:55 2023-10-03 23:34:16 3:24:47 smithi main centos 9.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-bitmap pool/small-cache-pool supported-random-distro$/{centos_latest} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

pass 7409825 2023-10-03 19:52:55 2023-10-03 19:53:34 2023-10-03 21:35:47 1:42:13 1:31:57 0:10:16 smithi main centos 8.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{centos_8} workloads/rbd_support_module_recovery} 1
fail 7409826 2023-10-03 19:52:56 2023-10-03 19:53:34 2023-10-03 23:53:44 4:00:10 3:47:26 0:12:44 smithi main ubuntu 20.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_20.04} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

pass 7409827 2023-10-03 19:52:57 2023-10-03 19:53:35 2023-10-03 21:54:10 2:00:35 1:50:36 0:09:59 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/defaults msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
fail 7409828 2023-10-03 19:52:58 2023-10-03 19:53:35 2023-10-03 22:31:35 2:38:00 2:28:39 0:09:21 smithi main ubuntu 22.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-hybrid pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'

pass 7409829 2023-10-03 19:52:58 2023-10-03 19:53:35 2023-10-03 21:36:18 1:42:43 1:33:59 0:08:44 smithi main centos 8.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{centos_8} workloads/rbd_support_module_recovery} 1
pass 7409830 2023-10-03 19:52:59 2023-10-03 19:53:36 2023-10-03 22:16:20 2:22:44 2:13:33 0:09:11 smithi main centos 9.stream rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-comp-lz4 pool/small-cache-pool supported-random-distro$/{centos_latest} workloads/rbd_support_module_recovery} 1
fail 7409831 2023-10-03 19:53:00 2023-10-03 19:53:36 2023-10-03 23:44:01 3:50:25 3:38:53 0:11:32 smithi main ubuntu 20.04 rbd:cli/{base/install clusters/{fixed-1 openstack} conf/{disable-pool-app} features/layering msgr-failures/few objectstore/bluestore-stupid pool/none supported-random-distro$/{ubuntu_20.04} workloads/rbd_support_module_recovery} 1
Failure Reason:

Command failed (workunit test rbd/rbd_support_module_recovery.sh) on smithi094 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7fa2fe7fd78eee9617aa81ca1e9d3def5c5ca231 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh'