Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 2229436 2018-02-26 10:14:18 2018-02-26 10:14:35 2018-02-26 13:30:40 3:16:05 3:09:01 0:07:04 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi138 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229437 2018-02-26 10:14:18 2018-02-26 10:15:59 2018-02-26 11:07:59 0:52:00 0:42:11 0:09:49 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229438 2018-02-26 10:14:19 2018-02-26 10:16:05 2018-02-26 13:34:09 3:18:04 3:09:23 0:08:41 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi194 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229439 2018-02-26 10:14:20 2018-02-26 10:16:52 2018-02-26 22:19:24 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229440 2018-02-26 10:14:21 2018-02-26 10:18:09 2018-02-26 22:20:39 12:02:30 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} 2
pass 2229441 2018-02-26 10:14:21 2018-02-26 10:18:09 2018-02-26 10:36:09 0:18:00 0:10:27 0:07:33 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
fail 2229442 2018-02-26 10:14:22 2018-02-26 10:18:10 2018-02-26 13:38:15 3:20:05 3:09:18 0:10:47 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi113 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229443 2018-02-26 10:14:23 2018-02-26 10:20:11 2018-02-26 11:18:14 0:58:03 0:50:34 0:07:29 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229444 2018-02-26 10:14:24 2018-02-26 10:20:13 2018-02-26 13:42:18 3:22:05 3:10:08 0:11:57 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi028 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229445 2018-02-26 10:14:25 2018-02-26 10:20:14 2018-02-26 22:22:45 12:02:31 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
dead 2229446 2018-02-26 10:14:26 2018-02-26 10:20:15 2018-02-26 22:22:47 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} 2
pass 2229447 2018-02-26 10:14:27 2018-02-26 10:22:10 2018-02-26 10:38:10 0:16:00 0:09:45 0:06:15 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} 2
fail 2229448 2018-02-26 10:14:28 2018-02-26 10:22:10 2018-02-26 13:38:15 3:16:05 3:09:06 0:06:59 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi174 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229449 2018-02-26 10:14:28 2018-02-26 10:22:11 2018-02-26 11:16:11 0:54:00 0:47:56 0:06:04 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229450 2018-02-26 10:14:29 2018-02-26 10:22:24 2018-02-26 13:42:29 3:20:05 3:09:38 0:10:27 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/one.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi062 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229451 2018-02-26 10:14:30 2018-02-26 10:22:28 2018-02-26 22:25:00 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync-many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229452 2018-02-26 10:14:31 2018-02-26 10:23:52 2018-02-26 22:26:18 12:02:26 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync.yaml workloads/pool-create-delete.yaml} 2
pass 2229453 2018-02-26 10:14:32 2018-02-26 10:24:08 2018-02-26 10:44:07 0:19:59 0:09:58 0:10:01 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_5925.yaml} 2
fail 2229454 2018-02-26 10:14:32 2018-02-26 10:24:09 2018-02-26 13:48:12 3:24:03 3:08:57 0:15:06 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi205 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229455 2018-02-26 10:14:33 2018-02-26 10:24:10 2018-02-26 11:24:10 1:00:00 0:47:50 0:12:10 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229456 2018-02-26 10:14:34 2018-02-26 10:24:21 2018-02-26 13:44:26 3:20:05 3:09:14 0:10:51 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi114 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229457 2018-02-26 10:14:35 2018-02-26 10:26:11 2018-02-26 22:28:42 12:02:31 11:37:52 0:24:39 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

SSH connection to smithi099 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

dead 2229458 2018-02-26 10:14:36 2018-02-26 10:26:59 2018-02-26 22:29:32 12:02:33 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} 2
pass 2229459 2018-02-26 10:14:36 2018-02-26 10:28:11 2018-02-26 10:58:10 0:29:59 0:09:08 0:20:51 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_5925.yaml} 2
fail 2229460 2018-02-26 10:14:37 2018-02-26 10:28:11 2018-02-26 13:44:15 3:16:04 3:08:43 0:07:21 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229461 2018-02-26 10:14:38 2018-02-26 10:28:11 2018-02-26 11:26:11 0:58:00 0:44:41 0:13:19 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi004 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229462 2018-02-26 10:14:39 2018-02-26 10:30:15 2018-02-26 13:46:20 3:16:05 3:09:24 0:06:41 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi018 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229463 2018-02-26 10:14:40 2018-02-26 10:30:15 2018-02-26 22:32:47 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/force-sync-many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229464 2018-02-26 10:14:40 2018-02-26 10:32:23 2018-02-26 22:34:52 12:02:29 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/many.yaml workloads/pool-create-delete.yaml} 2
pass 2229465 2018-02-26 10:14:41 2018-02-26 10:32:23 2018-02-26 10:54:22 0:21:59 0:08:39 0:13:20 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/one.yaml workloads/rados_5925.yaml} 2
fail 2229466 2018-02-26 10:14:42 2018-02-26 10:32:23 2018-02-26 13:54:26 3:22:03 3:08:58 0:13:05 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi141 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229467 2018-02-26 10:14:43 2018-02-26 10:32:23 2018-02-26 11:10:22 0:37:59 0:15:07 0:22:52 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229468 2018-02-26 10:14:43 2018-02-26 10:34:18 2018-02-26 13:50:23 3:16:05 3:09:20 0:06:45 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi078 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229469 2018-02-26 10:14:44 2018-02-26 10:34:19 2018-02-26 22:36:50 12:02:31 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229470 2018-02-26 10:14:45 2018-02-26 10:34:18 2018-02-26 22:36:50 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} 2
pass 2229471 2018-02-26 10:14:46 2018-02-26 10:36:20 2018-02-26 10:54:19 0:17:59 0:10:11 0:07:48 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
fail 2229472 2018-02-26 10:14:47 2018-02-26 10:36:20 2018-02-26 13:52:24 3:16:04 3:08:48 0:07:16 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi181 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229473 2018-02-26 10:14:47 2018-02-26 10:36:24 2018-02-26 11:38:24 1:02:00 0:38:54 0:23:06 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229474 2018-02-26 10:14:48 2018-02-26 10:38:19 2018-02-26 13:58:23 3:20:04 3:10:08 0:09:56 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi057 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229475 2018-02-26 10:14:49 2018-02-26 10:38:19 2018-02-26 22:40:50 12:02:31 11:54:49 0:07:42 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

SSH connection to smithi016 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

dead 2229476 2018-02-26 10:14:49 2018-02-26 10:38:19 2018-02-26 22:40:50 12:02:31 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} 2
pass 2229477 2018-02-26 10:14:50 2018-02-26 10:40:11 2018-02-26 10:56:10 0:15:59 0:09:21 0:06:38 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} 2
fail 2229478 2018-02-26 10:14:51 2018-02-26 10:40:16 2018-02-26 14:14:21 3:34:05 3:08:59 0:25:06 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229479 2018-02-26 10:14:52 2018-02-26 10:41:05 2018-02-26 11:53:05 1:12:00 0:45:19 0:26:41 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi183 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229480 2018-02-26 10:14:52 2018-02-26 10:41:26 2018-02-26 14:11:30 3:30:04 3:09:07 0:20:57 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi100 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229481 2018-02-26 10:14:53 2018-02-26 10:42:01 2018-02-26 22:44:32 12:02:31 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync-many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229482 2018-02-26 10:14:54 2018-02-26 10:42:13 2018-02-26 22:44:45 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync.yaml workloads/pool-create-delete.yaml} 2
pass 2229483 2018-02-26 10:14:55 2018-02-26 10:42:35 2018-02-26 11:24:35 0:42:00 0:09:25 0:32:35 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_5925.yaml} 2
fail 2229484 2018-02-26 10:14:55 2018-02-26 10:42:35 2018-02-26 14:00:41 3:18:06 3:08:34 0:09:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi195 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229485 2018-02-26 10:14:56 2018-02-26 10:44:11 2018-02-26 11:42:11 0:58:00 0:49:06 0:08:54 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229486 2018-02-26 10:14:57 2018-02-26 10:44:11 2018-02-26 14:02:16 3:18:05 3:09:11 0:08:54 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi182 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229487 2018-02-26 10:14:58 2018-02-26 10:44:11 2018-02-26 22:46:43 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/snaps-few-objects.yaml} 2
dead 2229488 2018-02-26 10:14:59 2018-02-26 10:46:25 2018-02-26 22:48:52 12:02:27 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} 2
pass 2229489 2018-02-26 10:14:59 2018-02-26 10:48:06 2018-02-26 11:10:06 0:22:00 0:08:37 0:13:23 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/many.yaml workloads/rados_5925.yaml} 2
fail 2229490 2018-02-26 10:15:00 2018-02-26 10:48:07 2018-02-26 14:12:12 3:24:05 3:08:52 0:15:13 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229491 2018-02-26 10:15:01 2018-02-26 10:48:15 2018-02-26 11:18:16 0:30:01 0:14:52 0:15:09 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi177 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229492 2018-02-26 10:15:02 2018-02-26 10:50:23 2018-02-26 14:10:27 3:20:04 3:09:32 0:10:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi032 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229493 2018-02-26 10:15:02 2018-02-26 10:50:22 2018-02-26 22:52:54 12:02:32 11:53:24 0:09:08 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

SSH connection to smithi146 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

dead 2229494 2018-02-26 10:15:03 2018-02-26 10:52:28 2018-02-26 22:55:01 12:02:33 11:34:50 0:27:43 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

SSH connection to smithi151 was lost: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_rados_delete_pools_parallel'

pass 2229495 2018-02-26 10:15:04 2018-02-26 10:54:17 2018-02-26 11:26:17 0:32:00 0:09:15 0:22:45 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/rados_5925.yaml} 2
fail 2229496 2018-02-26 10:15:05 2018-02-26 10:54:20 2018-02-26 14:24:24 3:30:04 3:09:04 0:21:00 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi040 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229497 2018-02-26 10:15:05 2018-02-26 10:54:22 2018-02-26 11:48:22 0:54:00 0:39:19 0:14:41 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

fail 2229498 2018-02-26 10:15:06 2018-02-26 10:54:23 2018-02-26 14:18:27 3:24:04 3:09:22 0:14:42 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed (workunit test mon/crush_ops.sh) on smithi022 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/crush_ops.sh'

dead 2229499 2018-02-26 10:15:07 2018-02-26 10:56:19 2018-02-26 22:58:51 12:02:32 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
dead 2229500 2018-02-26 10:15:08 2018-02-26 10:56:19 2018-02-26 22:58:50 12:02:31 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} 2
pass 2229501 2018-02-26 10:15:09 2018-02-26 10:58:19 2018-02-26 11:34:19 0:36:00 0:09:38 0:26:22 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
fail 2229502 2018-02-26 10:15:09 2018-02-26 11:00:26 2018-02-26 14:18:31 3:18:05 3:08:54 0:09:11 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi034 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 2229503 2018-02-26 10:15:10 2018-02-26 11:02:18 2018-02-26 11:32:17 0:29:59 0:15:26 0:14:33 smithi wip-mon-osdmap-prune rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi201 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mon-osdmap-prune TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'