Status:             fail
Job ID:             495775
Posted:             2016-10-26 02:36:51
Started:            2016-10-26 02:37:20
Updated:            2016-10-26 03:11:25
Runtime:            0:34:05 (duration 0:07:07, in waiting 0:26:58)
Machine:            mira
Teuthology branch:  master
Description:        rados/thrash-erasure-code/{leveldb.yaml rados.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/ext4.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/ec-small-objects-overwrites.yaml}
Nodes:              2
Failure Reason:

Command failed on mira005 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --no-sparse --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'

Status:             fail
Job ID:             495776
Posted:             2016-10-26 02:36:51
Started:            2016-10-26 02:38:14
Updated:            2016-10-26 03:04:14
Runtime:            0:26:00 (duration 0:11:22, in waiting 0:14:38)
Machine:            mira
Teuthology branch:  master
Description:        rados/thrash-erasure-code/{leveldb.yaml rados.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/btrfs.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/ec-snaps-few-objects-overwrites.yaml}
Nodes:              2
Failure Reason:

Command failed on mira061 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --no-sparse --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'