Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6454747 2021-10-21 12:57:30 2021-10-22 02:01:47 2021-10-22 02:34:25 0:32:38 0:19:52 0:12:46 smithi master centos 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/ec-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests} 3
Failure Reason:

"2021-10-22T02:23:00.512499+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6454748 2021-10-21 12:57:31 2021-10-22 02:02:58 2021-10-22 02:23:14 0:20:16 0:11:29 0:08:47 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi148 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454749 2021-10-21 12:57:32 2021-10-22 02:03:48 2021-10-22 02:23:55 0:20:07 0:11:44 0:08:23 smithi master rhel 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454750 2021-10-21 12:57:33 2021-10-22 02:04:29 2021-10-22 02:39:44 0:35:15 0:20:33 0:14:42 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{centos_8} workloads/fsx} 3
fail 6454751 2021-10-21 12:57:34 2021-10-22 02:06:19 2021-10-22 02:26:56 0:20:37 0:11:53 0:08:44 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests} 3
Failure Reason:

Command failed on smithi132 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454752 2021-10-21 12:57:35 2021-10-22 02:07:30 2021-10-22 02:28:47 0:21:17 0:11:32 0:09:45 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{rhel_8} workloads/python_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi133 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454753 2021-10-21 12:57:36 2021-10-22 02:09:21 2021-10-22 02:41:49 0:32:28 0:19:48 0:12:40 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_journaling} 3
pass 6454754 2021-10-21 12:57:37 2021-10-22 02:09:51 2021-10-22 02:51:40 0:41:49 0:29:21 0:12:28 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 6454755 2021-10-21 12:57:38 2021-10-22 02:10:11 2021-10-22 02:50:17 0:40:06 0:24:58 0:15:08 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6454756 2021-10-21 12:57:39 2021-10-22 02:11:32 2021-10-22 02:31:11 0:19:39 0:11:43 0:07:56 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi139 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454757 2021-10-21 12:57:39 2021-10-22 02:11:32 2021-10-22 03:00:14 0:48:42 0:34:50 0:13:52 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi171 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6454758 2021-10-21 12:57:40 2021-10-22 02:11:43 2021-10-22 02:41:46 0:30:03 0:17:41 0:12:22 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/small-cache-pool supported-random-distro$/{centos_8} workloads/fsx} 3
fail 6454759 2021-10-21 12:57:41 2021-10-22 02:12:33 2021-10-22 02:31:59 0:19:26 0:11:31 0:07:55 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests} 3
Failure Reason:

Command failed on smithi103 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454760 2021-10-21 12:57:42 2021-10-22 02:12:44 2021-10-22 02:46:44 0:34:00 0:21:22 0:12:38 smithi master centos 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/none supported-random-distro$/{centos_8} workloads/python_api_tests_with_defaults} 3
pass 6454761 2021-10-21 12:57:43 2021-10-22 02:13:24 2021-10-22 02:48:52 0:35:28 0:19:58 0:15:30 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_journaling} 3
pass 6454762 2021-10-21 12:57:44 2021-10-22 02:14:55 2021-10-22 02:56:49 0:41:54 0:29:43 0:12:11 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 6454763 2021-10-21 12:57:45 2021-10-22 02:15:05 2021-10-22 02:35:37 0:20:32 0:11:41 0:08:51 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-stupid pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454764 2021-10-21 12:57:46 2021-10-22 02:16:16 2021-10-22 02:36:17 0:20:01 0:11:43 0:08:18 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi117 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454765 2021-10-21 12:57:47 2021-10-22 02:17:06 2021-10-22 03:07:11 0:50:05 0:34:53 0:15:12 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi198 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6454766 2021-10-21 12:57:48 2021-10-22 02:19:17 2021-10-22 02:51:45 0:32:28 0:18:15 0:14:13 smithi master ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/fsx} 3
pass 6454767 2021-10-21 12:57:49 2021-10-22 02:19:17 2021-10-22 02:55:17 0:36:00 0:22:00 0:14:00 smithi master centos 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/ec-data-pool supported-random-distro$/{centos_8} workloads/python_api_tests} 3
pass 6454768 2021-10-21 12:57:50 2021-10-22 02:19:38 2021-10-22 02:55:15 0:35:37 0:19:56 0:15:41 smithi master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_defaults} 3
pass 6454769 2021-10-21 12:57:51 2021-10-22 02:20:38 2021-10-22 02:55:42 0:35:04 0:19:58 0:15:06 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_journaling} 3
pass 6454770 2021-10-21 12:57:52 2021-10-22 02:22:59 2021-10-22 03:03:37 0:40:38 0:27:33 0:13:05 smithi master ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-hybrid pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_fio} 3
fail 6454771 2021-10-21 12:57:53 2021-10-22 02:23:19 2021-10-22 02:43:17 0:19:58 0:11:45 0:08:13 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

Command failed on smithi116 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454772 2021-10-21 12:57:54 2021-10-22 02:24:00 2021-10-22 03:06:21 0:42:21 0:26:49 0:15:32 smithi master centos 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/none supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-22T02:49:35.006416+0000 mon.a (mon.0) 217 : cluster [WRN] pool 'test-librbd-smithi197-27281-9' is full (reached quota's max_bytes: 10 MiB)" in cluster log

fail 6454773 2021-10-21 12:57:55 2021-10-22 02:27:01 2021-10-22 02:46:03 0:19:02 0:11:45 0:07:17 smithi master rhel 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/filestore-xfs pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed on smithi132 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454774 2021-10-21 12:57:56 2021-10-22 02:27:01 2021-10-22 03:00:11 0:33:10 0:19:21 0:13:49 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/small-cache-pool supported-random-distro$/{centos_8} workloads/fsx} 3
fail 6454775 2021-10-21 12:57:57 2021-10-22 02:28:52 2021-10-22 02:48:43 0:19:51 0:11:24 0:08:27 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests} 3
Failure Reason:

Command failed on smithi087 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454776 2021-10-21 12:57:58 2021-10-22 02:29:02 2021-10-22 03:03:28 0:34:26 0:19:52 0:14:34 smithi master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_defaults} 3
pass 6454777 2021-10-21 12:57:59 2021-10-22 02:29:12 2021-10-22 03:06:19 0:37:07 0:21:29 0:15:38 smithi master centos 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/python_api_tests_with_journaling} 3
pass 6454778 2021-10-21 12:58:00 2021-10-22 02:30:03 2021-10-22 03:13:45 0:43:42 0:29:49 0:13:53 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 6454779 2021-10-21 12:58:01 2021-10-22 02:30:13 2021-10-22 03:08:36 0:38:23 0:26:18 0:12:05 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6454780 2021-10-21 12:58:02 2021-10-22 02:30:54 2021-10-22 02:51:04 0:20:10 0:11:37 0:08:33 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi139 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454781 2021-10-21 12:58:03 2021-10-22 02:31:14 2021-10-22 03:19:25 0:48:11 0:34:28 0:13:43 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-stupid pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi192 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6454782 2021-10-21 12:58:04 2021-10-22 02:31:14 2021-10-22 03:02:07 0:30:53 0:17:25 0:13:28 smithi master ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/fsx} 3
Failure Reason:

"2021-10-22T02:49:57.455005+0000 mon.a (mon.0) 117 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 6454783 2021-10-21 12:58:05 2021-10-22 02:32:05 2021-10-22 03:05:58 0:33:53 0:19:22 0:14:31 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/python_api_tests} 3
fail 6454784 2021-10-21 12:58:06 2021-10-22 02:34:15 2021-10-22 02:53:38 0:19:23 0:11:31 0:07:52 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{rhel_8} workloads/python_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi093 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454785 2021-10-21 12:58:07 2021-10-22 02:34:36 2021-10-22 03:08:50 0:34:14 0:21:39 0:12:35 smithi master centos 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/python_api_tests_with_journaling} 3
pass 6454786 2021-10-21 12:58:08 2021-10-22 02:35:46 2021-10-22 03:17:35 0:41:49 0:29:09 0:12:40 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 6454787 2021-10-21 12:58:09 2021-10-22 02:36:07 2021-10-22 02:55:39 0:19:32 0:11:43 0:07:49 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

Command failed on smithi099 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454788 2021-10-21 12:58:10 2021-10-22 02:36:27 2021-10-22 02:56:20 0:19:53 0:11:47 0:08:06 smithi master rhel 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed on smithi092 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

fail 6454789 2021-10-21 12:58:11 2021-10-22 02:37:08 2021-10-22 02:56:49 0:19:41 0:11:44 0:07:57 smithi master rhel 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed on smithi158 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454790 2021-10-21 12:58:12 2021-10-22 02:37:08 2021-10-22 03:09:16 0:32:08 0:18:22 0:13:46 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{centos_8} workloads/fsx} 3
fail 6454791 2021-10-21 12:58:13 2021-10-22 02:39:49 2021-10-22 02:59:26 0:19:37 0:11:40 0:07:57 smithi master rhel 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests} 3
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-cephadm'

pass 6454792 2021-10-21 12:58:14 2021-10-22 02:39:59 2021-10-22 03:12:51 0:32:52 0:19:31 0:13:21 smithi master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/none supported-random-distro$/{ubuntu_latest} workloads/python_api_tests_with_defaults} 3
pass 6454793 2021-10-21 12:58:15 2021-10-22 02:40:50 2021-10-22 03:15:06 0:34:16 0:21:34 0:12:42 smithi master centos 8.3 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/python_api_tests_with_journaling} 3
pass 6454794 2021-10-21 12:58:16 2021-10-22 02:41:50 2021-10-22 03:23:04 0:41:14 0:29:38 0:11:36 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_fio} 3
fail 6454795 2021-10-21 12:58:17 2021-10-22 02:41:51 2021-10-22 03:22:38 0:40:47 0:26:37 0:14:10 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e216c1ee110397eb48ef026c794021917db737f2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6454796 2021-10-21 12:58:18 2021-10-22 02:42:01 2021-10-22 03:22:41 0:40:40 0:27:46 0:12:54 smithi master centos 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/none supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-22T03:05:49.187267+0000 mon.a (mon.0) 223 : cluster [WRN] pool 'test-librbd-smithi186-27255-9' is full (reached quota's max_bytes: 10 MiB)" in cluster log