Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6443651 2021-10-14 22:47:28 2021-10-14 22:48:10 2021-10-15 00:02:29 1:14:19 1:05:03 0:09:16 smithi master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-snappy validator/memcheck workloads/c_api_tests_with_journaling} 1
pass 6443652 2021-10-14 22:47:29 2021-10-14 22:48:13 2021-10-14 23:26:17 0:38:04 0:26:59 0:11:05 smithi master centos 8.stream rbd/immutable-object-cache/{clusters/{fix-2 openstack} pool/ceph_and_immutable_object_cache supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_defaults} 2
fail 6443653 2021-10-14 22:47:30 2021-10-14 22:48:12 2021-10-14 23:44:14 0:56:02 0:44:49 0:11:13 smithi master centos 8.stream rbd/persistent-writeback-cache/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8.stream} 4-pool/cache 5-cache-mode/rwl 6-workloads/c_api_tests_with_defaults} 2
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi161 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443654 2021-10-14 22:47:31 2021-10-14 22:48:14 2021-10-14 23:32:17 0:44:03 0:35:48 0:08:15 smithi master rhel 8.4 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
fail 6443655 2021-10-14 22:47:32 2021-10-14 22:48:12 2021-10-14 23:22:13 0:34:01 0:21:37 0:12:24 smithi master centos 8.stream rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/replicated-data-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-14T23:11:10.145438+0000 mon.a (mon.0) 115 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 6443656 2021-10-14 22:47:33 2021-10-14 22:48:13 2021-10-14 23:38:14 0:50:01 0:37:06 0:12:55 smithi master centos 8.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-stupid pool/small-cache-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_journaling} 3
pass 6443657 2021-10-14 22:47:33 2021-10-14 22:48:12 2021-10-14 23:22:13 0:34:01 0:25:43 0:08:18 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/none supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
fail 6443658 2021-10-14 22:47:34 2021-10-14 22:48:15 2021-10-14 23:26:14 0:37:59 0:28:08 0:09:51 smithi master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-hybrid pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443659 2021-10-14 22:47:35 2021-10-14 22:48:11 2021-10-14 23:44:14 0:56:03 0:47:55 0:08:08 smithi master rhel 8.4 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
fail 6443660 2021-10-14 22:47:36 2021-10-14 22:48:14 2021-10-14 23:24:15 0:36:01 0:26:20 0:09:41 smithi master ubuntu 20.04 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/none supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

"2021-10-14T23:16:59.778731+0000 mon.a (mon.0) 930 : cluster [WRN] Health check failed: Degraded data redundancy: 2/584 objects degraded (0.342%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6443661 2021-10-14 22:47:37 2021-10-14 22:48:13 2021-10-14 23:34:17 0:46:04 0:30:07 0:15:57 smithi master centos 8.3 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/replicated-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443662 2021-10-14 22:47:38 2021-10-14 22:48:13 2021-10-14 23:34:19 0:46:06 0:32:56 0:13:10 smithi master centos 8.stream rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-hybrid pool/small-cache-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_journaling} 3
fail 6443663 2021-10-14 22:47:39 2021-10-14 22:48:12 2021-10-14 23:34:15 0:46:03 0:37:38 0:08:25 smithi master rhel 8.4 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests} 3
Failure Reason:

"2021-10-14T23:27:08.969482+0000 mon.a (mon.0) 772 : cluster [WRN] Health check failed: Degraded data redundancy: 2/454 objects degraded (0.441%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6443664 2021-10-14 22:47:40 2021-10-14 22:48:15 2021-10-14 23:26:17 0:38:02 0:29:30 0:08:32 smithi master rhel 8.4 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zlib pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6443665 2021-10-14 22:47:41 2021-10-14 22:48:12 2021-10-14 23:32:15 0:44:03 0:35:32 0:08:31 smithi master ubuntu 20.04 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zstd pool/small-cache-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
pass 6443666 2021-10-14 22:47:42 2021-10-14 22:48:15 2021-10-14 23:28:15 0:40:00 0:27:22 0:12:38 smithi master centos 8.3 rbd/persistent-writeback-cache/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_8} 4-pool/cache 5-cache-mode/ssd 6-workloads/c_api_tests_with_defaults} 2
pass 6443667 2021-10-14 22:47:43 2021-10-14 22:48:13 2021-10-14 23:32:13 0:44:00 0:27:10 0:16:50 smithi master centos 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{centos_8} workloads/c_api_tests} 3
dead 6443668 2021-10-14 22:47:44 2021-10-14 22:48:11 2021-10-15 10:53:18 12:05:07 smithi master rhel 8.4 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
pass 6443669 2021-10-14 22:47:45 2021-10-14 22:48:14 2021-10-14 23:48:17 1:00:03 0:44:25 0:15:38 smithi master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/filestore-xfs validator/memcheck workloads/c_api_tests} 1
pass 6443670 2021-10-14 22:47:46 2021-10-14 22:48:12 2021-10-14 23:50:17 1:02:05 0:52:49 0:09:16 smithi master rhel 8.4 rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib pool/small-cache-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
fail 6443671 2021-10-14 22:47:46 2021-10-14 22:48:14 2021-10-14 23:24:19 0:36:05 0:27:07 0:08:58 smithi master ubuntu 20.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/ec-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443672 2021-10-14 22:47:48 2021-10-14 22:48:13 2021-10-14 23:48:16 1:00:03 0:47:56 0:12:07 smithi master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-bitmap validator/memcheck workloads/c_api_tests_with_defaults} 1
fail 6443673 2021-10-14 22:47:49 2021-10-14 22:48:12 2021-10-14 23:34:15 0:46:03 0:38:25 0:07:38 smithi master rhel 8.4 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{rhel_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-14T23:26:43.436052+0000 mon.b (mon.0) 884 : cluster [WRN] Health check failed: Degraded data redundancy: 1/1266 objects degraded (0.079%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6443674 2021-10-14 22:47:49 2021-10-14 22:48:16 2021-10-14 23:42:18 0:54:02 0:45:19 0:08:43 smithi master rhel 8.4 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443675 2021-10-14 22:47:50 2021-10-14 22:48:14 2021-10-15 00:00:18 1:12:04 1:03:07 0:08:57 smithi master centos 8.3 rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} objectstore/bluestore-comp-lz4 validator/memcheck workloads/c_api_tests_with_journaling} 1
fail 6443676 2021-10-14 22:47:51 2021-10-14 22:48:12 2021-10-14 23:28:18 0:40:06 0:27:22 0:12:44 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/none min-compat-client/default msgr-failures/few objectstore/filestore-xfs pool/ec-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi058 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6443677 2021-10-14 22:47:52 2021-10-14 22:48:10 2021-10-14 23:29:08 0:40:58 0:28:15 0:12:43 smithi master centos 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-bitmap pool/none supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
Failure Reason:

"2021-10-14T23:22:26.512042+0000 mon.a (mon.0) 847 : cluster [WRN] Health check failed: Degraded data redundancy: 2/298 objects degraded (0.671%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6443678 2021-10-14 22:47:53 2021-10-14 22:48:10 2021-10-14 23:34:30 0:46:20 0:37:21 0:08:59 smithi master ubuntu 20.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-comp-lz4 pool/replicated-data-pool supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi076 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6443679 2021-10-14 22:47:54 2021-10-14 22:48:11 2021-10-14 23:30:43 0:42:32 0:29:00 0:13:32 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/copy-on-read min-compat-client/default msgr-failures/few objectstore/bluestore-stupid pool/ec-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

pass 6443680 2021-10-14 22:47:55 2021-10-14 22:48:11 2021-10-14 23:31:21 0:43:10 0:26:41 0:16:29 smithi master centos 8.3 rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/filestore-xfs pool/none supported-random-distro$/{centos_8} workloads/c_api_tests_with_defaults} 3
fail 6443681 2021-10-14 22:47:55 2021-10-14 22:48:12 2021-10-14 23:45:18 0:57:06 0:44:01 0:13:05 smithi master centos 8.stream rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-bitmap pool/replicated-data-pool supported-random-distro$/{centos_8.stream} workloads/c_api_tests_with_journaling} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

fail 6443682 2021-10-14 22:47:56 2021-10-14 22:48:16 2021-10-14 23:29:29 0:41:13 0:27:23 0:13:50 smithi master centos 8.3 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target pool/ec-data-pool supported-random-distro$/{centos_8} workloads/c_api_tests} 3
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=73892f6ed4ae1c5f5f9586e11662445f4074a392 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'