Ceph Branch: wip-jewel-20180627
Suite Branch: wip-jewel-20180627
Teuthology Branch: master
Machine: smithi
Description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/cfuse_workunit_suites_ffsb.yaml}
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-20180627 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

Ceph Branch: wip-jewel-20180627
Suite Branch: wip-jewel-20180627
Teuthology Branch: master
Machine: smithi
Description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/rados_api_tests.yaml}

Ceph Branch: wip-jewel-20180627
Suite Branch: wip-jewel-20180627
Teuthology Branch: master
Machine: smithi
Description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/radosbench.yaml}
Failure Reason: "2018-06-28 22:16:33.599548 osd.0 172.21.15.162:6804/10956 1 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.813451 secs" in cluster log

Ceph Branch: wip-jewel-20180627
Suite Branch: wip-jewel-20180627
Teuthology Branch: master
Machine: smithi
Description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/snaps-few-objects.yaml}

Ceph Branch: wip-jewel-20180627
Suite Branch: wip-jewel-20180627
Teuthology Branch: master
Machine: smithi
Description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml powercycle/default.yaml tasks/snaps-many-objects.yaml}
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'