User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail
---|---|---|---|---|---|---|---|---|---
teuthology | 2015-06-17 22:48:23 | 2015-06-17 22:48:33 | 2015-06-18 00:40:53 | 1:52:20 | upgrade:hammer | hammer | vps | — | 54
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 938382 | 2015-06-17 22:48:27 | 2015-06-17 22:48:33 | 2015-06-17 23:18:35 | 0:30:02 | 0:11:56 | 0:18:06 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Could not reconnect to ubuntu@vpm125.front.sepia.ceph.com
fail | 938383 | 2015-06-17 22:48:27 | 2015-06-17 22:48:44 | 2015-06-18 00:40:53 | 1:52:09 | 1:36:31 | 0:15:38 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm162 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938384 | 2015-06-17 22:48:28 | 2015-06-17 22:48:36 | 2015-06-18 00:24:43 | 1:36:07 | 1:22:55 | 0:13:12 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm156 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938385 | 2015-06-17 22:48:28 | 2015-06-17 22:48:33 | 2015-06-17 23:28:36 | 0:40:03 | 0:23:38 | 0:16:25 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm111 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938386 | 2015-06-17 22:48:29 | 2015-06-17 22:50:39 | 2015-06-18 00:16:46 | 1:26:07 | 1:11:43 | 0:14:24 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm185 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938387 | 2015-06-17 22:48:29 | 2015-06-17 22:48:39 | 2015-06-17 23:18:41 | 0:30:02 | 0:17:41 | 0:12:21 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm135 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty'
fail | 938388 | 2015-06-17 22:48:29 | 2015-06-17 22:49:11 | 2015-06-17 23:27:13 | 0:38:02 | 0:24:12 | 0:13:50 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm129 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise'
fail | 938389 | 2015-06-17 22:48:30 | 2015-06-17 22:49:03 | 2015-06-17 23:27:05 | 0:38:02 | 0:22:39 | 0:15:23 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm015 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938390 | 2015-06-17 22:48:30 | 2015-06-17 22:50:04 | 2015-06-18 00:28:11 | 1:38:07 | 1:22:01 | 0:16:06 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm071 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938391 | 2015-06-17 22:48:31 | 2015-06-17 22:50:25 | 2015-06-18 00:28:33 | 1:38:08 | 1:20:08 | 0:18:00 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm121 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests'
fail | 938392 | 2015-06-17 22:48:31 | 2015-06-17 22:50:31 | 2015-06-17 23:24:33 | 0:34:02 | 0:18:06 | 0:15:56 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm098 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938393 | 2015-06-17 22:48:32 | 2015-06-17 22:50:50 | 2015-06-17 23:28:52 | 0:38:02 | 0:20:35 | 0:17:27 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm147 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty'
fail | 938394 | 2015-06-17 22:48:32 | 2015-06-17 22:50:17 | 2015-06-17 23:30:20 | 0:40:03 | 0:25:27 | 0:14:36 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise'
fail | 938395 | 2015-06-17 22:48:33 | 2015-06-17 22:48:41 | 2015-06-18 00:22:49 | 1:34:08 | 1:20:24 | 0:13:44 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm140 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938396 | 2015-06-17 22:48:33 | 2015-06-17 22:50:23 | 2015-06-17 23:24:25 | 0:34:02 | 0:20:37 | 0:13:25 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm142 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938397 | 2015-06-17 22:48:34 | 2015-06-17 22:49:13 | 2015-06-18 00:21:20 | 1:32:07 | 1:17:05 | 0:15:02 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm128 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests'
fail | 938398 | 2015-06-17 22:48:34 | 2015-06-17 22:49:24 | 2015-06-18 00:15:30 | 1:26:06 | 1:10:02 | 0:16:04 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm046 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938399 | 2015-06-17 22:48:35 | 2015-06-17 22:49:50 | 2015-06-17 23:23:53 | 0:34:03 | 0:18:55 | 0:15:08 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm104 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938400 | 2015-06-17 22:48:35 | 2015-06-17 22:48:57 | 2015-06-17 23:47:02 | 0:58:05 | 0:36:54 | 0:21:11 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm155 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938401 | 2015-06-17 22:48:36 | 2015-06-17 22:48:55 | 2015-06-18 00:11:01 | 1:22:06 | 1:08:13 | 0:13:53 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm038 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938402 | 2015-06-17 22:48:36 | 2015-06-17 22:50:20 | 2015-06-17 23:22:22 | 0:32:02 | 0:18:53 | 0:13:09 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm054 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938403 | 2015-06-17 22:48:37 | 2015-06-17 22:48:47 | 2015-06-17 23:14:48 | 0:26:01 | 0:12:00 | 0:14:01 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm099 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938404 | 2015-06-17 22:48:37 | 2015-06-17 22:50:47 | 2015-06-18 00:08:53 | 1:18:06 | 1:06:09 | 0:11:57 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm197 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938405 | 2015-06-17 22:48:37 | 2015-06-17 22:50:12 | 2015-06-18 00:16:18 | 1:26:06 | 1:11:47 | 0:14:19 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm074 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938406 | 2015-06-17 22:48:38 | 2015-06-17 22:50:36 | 2015-06-18 00:18:43 | 1:28:07 | 1:12:53 | 0:15:14 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm168 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938407 | 2015-06-17 22:48:38 | 2015-06-17 22:48:52 | 2015-06-17 23:10:53 | 0:22:01 | 0:09:50 | 0:12:11 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm199 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938408 | 2015-06-17 22:48:39 | 2015-06-17 22:48:49 | 2015-06-18 00:14:56 | 1:26:07 | 1:12:29 | 0:13:38 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm146 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938409 | 2015-06-17 22:48:39 | 2015-06-17 22:49:56 | 2015-06-17 23:31:59 | 0:42:03 | 0:25:01 | 0:17:02 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm177 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938410 | 2015-06-17 22:48:40 | 2015-06-17 22:50:58 | 2015-06-17 23:12:59 | 0:22:01 | 0:09:51 | 0:12:10 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm172 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938411 | 2015-06-17 22:48:40 | 2015-06-17 22:49:00 | 2015-06-18 00:21:07 | 1:32:07 | 1:16:09 | 0:15:58 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm061 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938412 | 2015-06-17 22:48:41 | 2015-06-17 22:49:05 | 2015-06-18 00:09:11 | 1:20:06 | 1:07:38 | 0:12:28 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm035 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938413 | 2015-06-17 22:48:41 | 2015-06-17 22:49:16 | 2015-06-17 23:19:18 | 0:30:02 | 0:16:38 | 0:13:24 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm056 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty'
fail | 938414 | 2015-06-17 22:48:42 | 2015-06-17 22:49:40 | 2015-06-18 00:19:46 | 1:30:06 | 1:16:14 | 0:13:52 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm043 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 938415 | 2015-06-17 22:48:42 | 2015-06-17 22:49:42 | 2015-06-18 00:19:49 | 1:30:07 | 1:16:19 | 0:13:48 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm047 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests'
fail | 938416 | 2015-06-17 22:48:43 | 2015-06-17 22:50:01 | 2015-06-17 23:28:04 | 0:38:03 | 0:21:54 | 0:16:09 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm064 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise'
fail | 938417 | 2015-06-17 22:48:43 | 2015-06-17 22:49:45 | 2015-06-17 23:19:47 | 0:30:02 | 0:16:38 | 0:13:24 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm103 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 938418 | 2015-06-17 22:48:44 | 2015-06-17 22:49:34 | 2015-06-17 23:55:39 | 1:06:05 | 0:52:45 | 0:13:20 | vps | master | ubuntu | 12.04 | upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test.sh) on vpm153 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=78d894a634d727a9367f809a1f57234e5e6935be TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
fail | 938419 | 2015-06-17 22:48:44 | 2015-06-17 22:50:09 | 2015-06-17 23:24:12 | 0:34:03 | 0:17:04 | 0:16:59 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm058 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938420 | 2015-06-17 22:48:45 | 2015-06-17 22:49:58 | 2015-06-17 23:32:01 | 0:42:03 | 0:24:04 | 0:17:59 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm127 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty'

fail | 938421 | 2015-06-17 22:48:45 | 2015-06-17 22:50:34 | 2015-06-17 23:22:36 | 0:32:02 | 0:15:24 | 0:16:38 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm097 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938422 | 2015-06-17 22:48:45 | 2015-06-17 22:49:26 | 2015-06-18 00:15:33 | 1:26:07 | 1:13:11 | 0:12:56 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm110 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests'

fail | 938423 | 2015-06-17 22:48:46 | 2015-06-17 22:50:53 | 2015-06-17 23:26:55 | 0:36:02 | 0:22:43 | 0:13:19 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm183 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise'

fail | 938424 | 2015-06-17 22:48:46 | 2015-06-17 22:50:55 | 2015-06-18 00:31:03 | 1:40:08 | 1:21:06 | 0:19:02 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm161 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938425 | 2015-06-17 22:48:47 | 2015-06-17 22:49:48 | 2015-06-18 00:19:54 | 1:30:06 | 1:15:18 | 0:14:48 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm136 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938426 | 2015-06-17 22:48:47 | 2015-06-17 22:49:21 | 2015-06-18 00:07:27 | 1:18:06 | 1:03:30 | 0:14:36 | vps | master | ubuntu | 14.04 | upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test.sh) on vpm013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=78d894a634d727a9367f809a1f57234e5e6935be TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'

fail | 938427 | 2015-06-17 22:48:48 | 2015-06-17 22:49:08 | 2015-06-17 23:13:09 | 0:24:01 | 0:08:24 | 0:15:37 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm101 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938428 | 2015-06-17 22:48:48 | 2015-06-17 22:50:28 | 2015-06-17 23:28:31 | 0:38:03 | 0:24:45 | 0:13:18 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed (s3 tests against rgw) on vpm154 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"

fail | 938429 | 2015-06-17 22:48:49 | 2015-06-17 22:49:32 | 2015-06-17 23:27:34 | 0:38:02 | 0:22:13 | 0:15:49 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm102 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938430 | 2015-06-17 22:48:49 | 2015-06-17 22:49:18 | 2015-06-17 23:11:20 | 0:22:02 | 0:08:13 | 0:13:49 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm050 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938431 | 2015-06-17 22:48:50 | 2015-06-17 22:49:29 | 2015-06-18 00:25:36 | 1:36:07 | 1:17:55 | 0:18:12 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm057 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938432 | 2015-06-17 22:48:50 | 2015-06-17 22:50:42 | 2015-06-18 00:18:48 | 1:28:06 | 1:12:43 | 0:15:23 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm151 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938433 | 2015-06-17 22:48:51 | 2015-06-17 22:49:37 | 2015-06-18 00:15:44 | 1:26:07 | 1:13:03 | 0:13:04 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm051 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938434 | 2015-06-17 22:48:51 | 2015-06-17 22:49:53 | 2015-06-18 00:13:59 | 1:24:06 | 1:10:18 | 0:13:48 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason: Command failed on vpm044 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'

fail | 938435 | 2015-06-17 22:48:52 | 2015-06-17 22:50:07 | 2015-06-17 23:28:09 | 0:38:02 | 0:24:04 | 0:13:58 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason: Command failed on vpm082 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'