User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-12-07 16:39:06 | 2023-12-07 16:41:24 | 2023-12-08 05:14:31 | 12:33:07 | rados | wip-yuri-testing-2023-12-06-1240 | smithi | f0c3223 | 9 | 23 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7482207 | 2023-12-07 16:40:29 | 2023-12-07 16:41:21 | 2023-12-07 17:16:21 | 0:35:00 | 0:21:06 | 0:13:54 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi012 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f0c322331348463ad8afb13a3a03e833bee1c39c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b4186e0-9523-11ee-95a2-87774f69a715 -- ceph rgw realm bootstrap -i -'
pass | 7482208 | 2023-12-07 16:40:30 | 2023-12-07 16:41:22 | 2023-12-07 17:20:07 | 0:38:45 | 0:25:41 | 0:13:04 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
pass | 7482209 | 2023-12-07 16:40:31 | 2023-12-07 16:41:22 | 2023-12-07 17:32:42 | 0:51:20 | 0:38:13 | 0:13:07 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
fail | 7482210 | 2023-12-07 16:40:31 | 2023-12-07 16:41:22 | 2023-12-07 17:52:26 | 1:11:04 | 0:57:00 | 0:14:04 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482211 | 2023-12-07 16:40:32 | 2023-12-07 16:41:23 | 2023-12-07 17:09:28 | 0:28:05 | 0:18:20 | 0:09:45 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} | 1 | |
Failure Reason:
Command failed on smithi061 with status 1: 'sudo yum -y install ceph'
pass | 7482212 | 2023-12-07 16:40:33 | 2023-12-07 16:41:23 | 2023-12-07 17:08:36 | 0:27:13 | 0:15:06 | 0:12:07 | smithi | main | centos | 9.stream | rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} tasks/mon_clock_with_skews} | 2 | |
fail | 7482213 | 2023-12-07 16:40:34 | 2023-12-07 16:41:23 | 2023-12-07 17:28:08 | 0:46:45 | 0:33:56 | 0:12:49 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7482214 | 2023-12-07 16:40:34 | 2023-12-07 16:41:24 | 2023-12-07 17:07:07 | 0:25:43 | 0:17:11 | 0:08:32 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_ca_signed_key} | 2 | |
Failure Reason:
Command failed on smithi142 with status 1: 'sudo yum -y install ceph'
fail | 7482215 | 2023-12-07 16:40:35 | 2023-12-07 16:41:24 | 2023-12-07 17:08:30 | 0:27:06 | 0:16:05 | 0:11:01 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
pass | 7482216 | 2023-12-07 16:40:36 | 2023-12-07 16:41:25 | 2023-12-07 17:32:33 | 0:51:08 | 0:38:36 | 0:12:32 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 7482217 | 2023-12-07 16:40:37 | 2023-12-07 16:41:25 | 2023-12-07 17:15:47 | 0:34:22 | 0:25:00 | 0:09:22 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7482218 | 2023-12-07 16:40:37 | 2023-12-07 16:41:25 | 2023-12-07 17:13:28 | 0:32:03 | 0:18:42 | 0:13:21 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} tasks/libcephsqlite} | 2 | |
fail | 7482219 | 2023-12-07 16:40:38 | 2023-12-07 16:41:26 | 2023-12-07 18:55:47 | 2:14:21 | 2:02:12 | 0:12:09 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482220 | 2023-12-07 16:40:39 | 2023-12-07 16:41:26 | 2023-12-07 17:09:37 | 0:28:11 | 0:18:00 | 0:10:11 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_host_drain} | 3 | |
Failure Reason:
Command failed on smithi148 with status 1: 'sudo yum -y install ceph'
pass | 7482221 | 2023-12-07 16:40:40 | 2023-12-07 16:41:26 | 2023-12-07 17:15:26 | 0:34:00 | 0:21:48 | 0:12:12 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
dead | 7482222 | 2023-12-07 16:40:40 | 2023-12-07 16:41:27 | 2023-12-08 04:54:08 | 12:12:41 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason:
hit max job timeout
dead | 7482223 | 2023-12-07 16:40:41 | 2023-12-07 16:41:27 | 2023-12-08 04:54:04 | 12:12:37 | | | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason:
hit max job timeout
fail | 7482224 | 2023-12-07 16:40:42 | 2023-12-07 16:41:28 | 2023-12-07 17:40:13 | 0:58:45 | 0:45:59 | 0:12:46 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} | 1 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds
fail | 7482225 | 2023-12-07 16:40:43 | 2023-12-07 16:41:28 | 2023-12-07 17:10:30 | 0:29:02 | 0:18:30 | 0:10:32 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi049 with status 1: 'sudo yum -y install ceph'
fail | 7482226 | 2023-12-07 16:40:44 | 2023-12-07 16:41:29 | 2023-12-07 17:07:21 | 0:25:52 | 0:14:35 | 0:11:17 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'
fail | 7482227 | 2023-12-07 16:40:44 | 2023-12-07 16:41:29 | 2023-12-07 18:02:20 | 1:20:51 | 1:06:48 | 0:14:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482228 | 2023-12-07 16:40:45 | 2023-12-07 16:41:29 | 2023-12-07 17:17:56 | 0:36:27 | 0:13:12 | 0:23:15 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 | |
Failure Reason:
Command failed on smithi091 with status 1: 'sudo yum -y install ceph'
fail | 7482229 | 2023-12-07 16:40:46 | 2023-12-07 16:58:42 | 2023-12-07 17:18:43 | 0:20:01 | 0:10:13 | 0:09:48 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi192 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
fail | 7482230 | 2023-12-07 16:40:47 | 2023-12-07 16:58:43 | 2023-12-07 17:41:41 | 0:42:58 | 0:31:54 | 0:11:04 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7482231 | 2023-12-07 16:40:47 | 2023-12-07 17:00:53 | 2023-12-07 17:28:21 | 0:27:28 | 0:20:47 | 0:06:41 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
fail | 7482232 | 2023-12-07 16:40:48 | 2023-12-07 17:01:24 | 2023-12-07 17:20:00 | 0:18:36 | 0:10:11 | 0:08:25 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
dead | 7482233 | 2023-12-07 16:40:49 | 2023-12-07 17:01:24 | 2023-12-07 17:20:56 | 0:19:32 | | | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 |
Failure Reason:
Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
fail | 7482234 | 2023-12-07 16:40:50 | 2023-12-07 17:01:35 | 2023-12-07 17:42:42 | 0:41:07 | 0:30:47 | 0:10:20 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482235 | 2023-12-07 16:40:51 | 2023-12-07 17:02:05 | 2023-12-07 17:55:05 | 0:53:00 | 0:42:50 | 0:10:10 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482236 | 2023-12-07 16:40:51 | 2023-12-07 17:02:06 | 2023-12-07 17:20:47 | 0:18:41 | 0:13:28 | 0:05:13 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
Failure Reason:
Command failed on smithi059 with status 1: 'sudo yum -y install ceph'
dead | 7482237 | 2023-12-07 16:40:52 | 2023-12-07 17:02:26 | 2023-12-08 05:12:41 | 12:10:15 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason:
hit max job timeout
dead | 7482238 | 2023-12-07 16:40:53 | 2023-12-07 17:02:26 | 2023-12-08 05:14:31 | 12:12:05 | | | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason:
hit max job timeout
fail | 7482239 | 2023-12-07 16:40:54 | 2023-12-07 17:03:17 | 2023-12-07 17:27:45 | 0:24:28 | 0:13:15 | 0:11:13 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} | 5 | |
Failure Reason:
Command failed on smithi133 with status 1: 'sudo yum -y install ceph'
pass | 7482240 | 2023-12-07 16:40:54 | 2023-12-07 17:07:18 | 2023-12-07 17:38:23 | 0:31:05 | 0:21:42 | 0:09:23 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
fail | 7482241 | 2023-12-07 16:40:55 | 2023-12-07 17:07:28 | 2023-12-07 17:39:37 | 0:32:09 | 0:18:43 | 0:13:26 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f0c322331348463ad8afb13a3a03e833bee1c39c shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fb3996d4-9525-11ee-95a2-87774f69a715 -- ceph rgw realm bootstrap -i -'
fail | 7482242 | 2023-12-07 16:40:56 | 2023-12-07 17:08:39 | 2023-12-07 20:55:44 | 3:47:05 | 3:37:36 | 0:09:29 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi086 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
fail | 7482243 | 2023-12-07 16:40:57 | 2023-12-07 17:08:39 | 2023-12-07 17:30:26 | 0:21:47 | 0:10:50 | 0:10:57 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0c322331348463ad8afb13a3a03e833bee1c39c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'