User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-05-21 14:33:19 | 2023-05-21 14:50:36 | 2023-05-22 03:21:31 | 12:30:55 | rbd | wip-yuri4-testing-2023-05-18-0754-quincy | smithi | 465d59c | 3 | 26 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7281828 | 2023-05-21 14:33:56 | 2023-05-21 14:50:36 | 2023-05-21 15:57:26 | 1:06:50 | 0:57:56 | 0:08:54 | smithi | main | rhel | 8.4 | rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-lz4 policy/none rbd-mirror/four-per-cluster supported-random-distro$/{rhel_8} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} | 2 | |
fail | 7281829 | 2023-05-21 14:33:57 | 2023-05-21 14:52:37 | 2023-05-21 15:22:58 | 0:30:21 | 0:20:22 | 0:09:59 | smithi | main | ubuntu | 20.04 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-bitmap pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi178 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281830 | 2023-05-21 14:33:58 | 2023-05-21 14:52:37 | 2023-05-21 15:49:55 | 0:57:18 | 0:47:52 | 0:09:26 | smithi | main | rhel | 8.4 | rbd/singleton-bluestore/{all/issue-20295 objectstore/bluestore-bitmap openstack supported-random-distro$/{rhel_8}} | 4 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
fail | 7281831 | 2023-05-21 14:33:59 | 2023-05-21 14:53:31 | 2023-05-21 16:57:52 | 2:04:21 | 1:50:25 | 0:13:56 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/filestore-xfs 4-supported-random-distro$/{centos_8} 5-pool/ec-data-pool 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T16:49:07.201233+0000 mon.a (mon.0) 272 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log
fail | 7281832 | 2023-05-21 14:34:00 | 2023-05-21 14:55:44 | 2023-05-21 15:50:43 | 0:54:59 | 0:43:51 | 0:11:08 | smithi | main | ubuntu | 20.04 | rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_fio} | 3 | |
Failure Reason: "2023-05-21T15:45:06.852353+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7281833 | 2023-05-21 14:34:01 | 2023-05-21 14:56:44 | 2023-05-21 18:47:35 | 3:50:51 | 3:38:24 | 0:12:27 | smithi | main | centos | 8.stream | rbd/cli/{base/install clusters/{fixed-1 openstack} features/layering msgr-failures/few objectstore/filestore-xfs pool/small-cache-pool supported-random-distro$/{centos_8} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi027 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281834 | 2023-05-21 14:34:02 | 2023-05-21 14:57:35 | 2023-05-21 16:46:48 | 1:49:13 | 1:39:40 | 0:09:33 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-bitmap 4-supported-random-distro$/{centos_8} 5-pool/none 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
fail | 7281835 | 2023-05-21 14:34:03 | 2023-05-21 14:57:35 | 2023-05-21 15:28:21 | 0:30:46 | 0:20:07 | 0:10:39 | smithi | main | ubuntu | 20.04 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi149 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
pass | 7281836 | 2023-05-21 14:34:04 | 2023-05-21 14:58:26 | 2023-05-21 15:45:38 | 0:47:12 | 0:40:34 | 0:06:38 | smithi | main | rhel | 8.4 | rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} | 3 | |
fail | 7281837 | 2023-05-21 14:34:05 | 2023-05-21 14:58:27 | 2023-05-21 16:53:41 | 1:55:14 | 1:47:49 | 0:07:25 | smithi | main | rhel | 8.4 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-lz4 4-supported-random-distro$/{rhel_8} 5-pool/replicated-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T16:48:50.790972+0000 mon.a (mon.0) 237 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 7281838 | 2023-05-21 14:34:06 | 2023-05-21 14:58:27 | 2023-05-21 17:52:49 | 2:54:22 | 2:39:30 | 0:14:52 | smithi | main | centos | 8.stream | rbd/pwl-cache/home/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/8G 7-workloads/fio} | 2 | |
Failure Reason: "2023-05-21T17:24:18.800731+0000 mon.a (mon.0) 117 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7281839 | 2023-05-21 14:34:07 | 2023-05-21 15:03:29 | 2023-05-21 16:04:02 | 1:00:33 | 0:52:33 | 0:08:00 | smithi | main | rhel | 8.4 | rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} | 3 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
fail | 7281840 | 2023-05-21 14:34:08 | 2023-05-21 15:04:29 | 2023-05-21 17:21:55 | 2:17:26 | 2:11:10 | 0:06:16 | smithi | main | rhel | 8.4 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-snappy 4-supported-random-distro$/{rhel_8} 5-pool/ec-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T17:13:22.281748+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 7281841 | 2023-05-21 14:34:09 | 2023-05-21 15:04:40 | 2023-05-21 15:37:50 | 0:33:10 | 0:27:29 | 0:05:41 | smithi | main | rhel | 8.4 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-comp-zstd pool/none supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi191 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281842 | 2023-05-21 14:34:10 | 2023-05-21 15:04:40 | 2023-05-21 15:51:03 | 0:46:23 | 0:35:49 | 0:10:34 | smithi | main | ubuntu | 20.04 | rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{ubuntu_latest} workloads/fio_on_immutable_object_cache} | 2 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
fail | 7281843 | 2023-05-21 14:34:11 | 2023-05-21 15:05:01 | 2023-05-21 16:35:23 | 1:30:22 | 1:22:17 | 0:08:05 | smithi | main | rhel | 8.4 | rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{rhel_8} 4-cache-path 5-cache-mode/ssd 6-cache-size/5G 7-workloads/qemu_xfstests} | 2 | |
Failure Reason: Command failed on smithi182 with status 1: 'test -f /home/ubuntu/cephtest/archive/qemu/client.0/success'
fail | 7281844 | 2023-05-21 14:34:12 | 2023-05-21 15:06:21 | 2023-05-21 17:40:43 | 2:34:22 | 2:23:10 | 0:11:12 | smithi | main | ubuntu | 20.04 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zlib 4-supported-random-distro$/{ubuntu_latest} 5-pool/none 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T17:35:03.296025+0000 mon.a (mon.0) 181 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7281845 | 2023-05-21 14:34:12 | 2023-05-21 15:06:32 | 2023-05-21 15:36:33 | 0:30:01 | 0:19:27 | 0:10:34 | smithi | main | ubuntu | 20.04 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi057 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281846 | 2023-05-21 14:34:13 | 2023-05-21 15:06:32 | 2023-05-21 15:54:16 | 0:47:44 | 0:37:32 | 0:10:12 | smithi | main | centos | 8.stream | rbd/singleton-bluestore/{all/issue-20295 objectstore/bluestore-comp-snappy openstack supported-random-distro$/{centos_8}} | 4 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
fail | 7281847 | 2023-05-21 14:34:14 | 2023-05-21 15:07:13 | 2023-05-21 16:56:59 | 1:49:46 | 1:35:26 | 0:14:20 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zstd 4-supported-random-distro$/{centos_8} 5-pool/replicated-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.
dead | 7281848 | 2023-05-21 14:34:15 | 2023-05-21 15:09:04 | 2023-05-22 03:21:31 | 12:12:27 | | | smithi | main | ubuntu | 20.04 | rbd/cli/{base/install clusters/{fixed-1 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_migration} | 1 | |
Failure Reason: hit max job timeout
fail | 7281849 | 2023-05-21 14:34:16 | 2023-05-21 15:09:04 | 2023-05-21 17:13:47 | 2:04:43 | 1:53:20 | 0:11:23 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-hybrid 4-supported-random-distro$/{centos_8} 5-pool/ec-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T17:05:16.474591+0000 mon.a (mon.0) 268 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 7281850 | 2023-05-21 14:34:17 | 2023-05-21 15:10:45 | 2023-05-21 15:46:19 | 0:35:34 | 0:27:49 | 0:07:45 | smithi | main | rhel | 8.4 | rbd/cli/{base/install clusters/{fixed-1 openstack} features/journaling msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{rhel_8} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi158 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281851 | 2023-05-21 14:34:18 | 2023-05-21 15:10:45 | 2023-05-21 15:42:28 | 0:31:43 | 0:21:01 | 0:10:42 | smithi | main | ubuntu | 20.04 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/filestore-xfs pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi110 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
fail | 7281852 | 2023-05-21 14:34:19 | 2023-05-21 15:10:46 | 2023-05-21 17:57:14 | 2:46:28 | 2:32:56 | 0:13:32 | smithi | main | ubuntu | 20.04 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-low-osd-mem-target 4-supported-random-distro$/{ubuntu_latest} 5-pool/none 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T17:50:52.328588+0000 mon.a (mon.0) 191 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7281853 | 2023-05-21 14:34:20 | 2023-05-21 15:11:16 | 2023-05-21 16:20:22 | 1:09:06 | 1:00:18 | 0:08:48 | smithi | main | rhel | 8.4 | rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/rbd_fio} | 3 | |
Failure Reason: "2023-05-21T16:15:46.319349+0000 mon.a (mon.0) 316 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 7281854 | 2023-05-21 14:34:22 | 2023-05-21 15:13:07 | 2023-05-21 17:18:51 | 2:05:44 | 1:58:15 | 0:07:29 | smithi | main | rhel | 8.4 | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-stupid 4-supported-random-distro$/{rhel_8} 5-pool/replicated-data-pool 6-prepare/raw-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: "2023-05-21T17:13:17.522621+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7281855 | 2023-05-21 14:34:22 | 2023-05-21 15:13:48 | 2023-05-21 15:44:11 | 0:30:23 | 0:20:04 | 0:10:19 | smithi | main | ubuntu | 20.04 | rbd/cli_v1/{base/install clusters/{fixed-1 openstack} features/format-1 msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{ubuntu_latest} workloads/rbd_cli_generic} | 1 | |
Failure Reason: Command failed (workunit test rbd/cli_generic.sh) on smithi170 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465d59c7325658c86eb5d8820da2d8fc49b7a1cd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/cli_generic.sh'
pass | 7281856 | 2023-05-21 14:34:23 | 2023-05-21 15:14:28 | 2023-05-21 15:38:04 | 0:23:36 | 0:10:55 | 0:12:41 | smithi | main | centos | 8.stream | rbd/singleton/{all/read-flags-writethrough objectstore/bluestore-low-osd-mem-target openstack supported-random-distro$/{centos_8}} | 1 | |
fail | 7281857 | 2023-05-21 14:34:25 | 2023-05-21 15:17:09 | 2023-05-21 17:11:13 | 1:54:04 | 1:44:19 | 0:09:45 | smithi | main | centos | 8.stream | rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/filestore-xfs 4-supported-random-distro$/{centos_8} 5-pool/ec-data-pool 6-prepare/qcow2-file 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup} | 3 | |
Failure Reason: Exiting scrub checking -- not all pgs scrubbed.