User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-06-24 14:53:40 | 2015-06-24 14:54:00 | 2015-06-24 16:49:08 | 1:55:08 | upgrade:hammer | hammer | vps | — | 2 | 52 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 947780 | 2015-06-24 14:53:46 | 2015-06-24 14:54:00 | 2015-06-24 15:52:03 | 0:58:03 | 0:33:42 | 0:24:21 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm156 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947781 | 2015-06-24 14:53:48 | 2015-06-24 14:54:00 | 2015-06-24 16:28:06 | 1:34:06 | 1:19:34 | 0:14:32 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm059 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947782 | 2015-06-24 14:53:49 | 2015-06-24 14:54:20 | 2015-06-24 16:22:26 | 1:28:06 | 1:16:25 | 0:11:41 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm119 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947783 | 2015-06-24 14:53:50 | 2015-06-24 14:54:00 | 2015-06-24 15:32:01 | 0:38:01 | 0:21:45 | 0:16:16 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm061 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947784 | 2015-06-24 14:53:51 | 2015-06-24 14:54:02 | 2015-06-24 16:16:07 | 1:22:05 | 1:08:34 | 0:13:31 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm076 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947785 | 2015-06-24 14:53:53 | 2015-06-24 14:54:06 | 2015-06-24 15:46:08 | 0:52:02 | 0:37:22 | 0:14:40 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm084 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty' |
fail | 947786 | 2015-06-24 14:53:54 | 2015-06-24 14:54:09 | 2015-06-24 15:26:10 | 0:32:01 | 0:18:47 | 0:13:14 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm034 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise' |
fail | 947787 | 2015-06-24 14:53:55 | 2015-06-24 14:54:13 | 2015-06-24 15:14:13 | 0:20:00 | 0:09:28 | 0:10:32 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm128 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947788 | 2015-06-24 14:53:57 | 2015-06-24 14:55:30 | 2015-06-24 16:23:23 | 1:27:53 | 1:13:22 | 0:14:31 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm079 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947789 | 2015-06-24 14:53:58 | 2015-06-24 14:54:31 | 2015-06-24 16:26:36 | 1:32:05 | 1:18:24 | 0:13:41 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm035 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests' |
fail | 947790 | 2015-06-24 14:54:00 | 2015-06-24 14:56:54 | 2015-06-24 15:30:55 | 0:34:01 | 0:16:56 | 0:17:05 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm058 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947791 | 2015-06-24 14:54:01 | 2015-06-24 14:54:16 | 2015-06-24 15:30:18 | 0:36:02 | 0:22:07 | 0:13:55 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm159 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty' |
fail | 947792 | 2015-06-24 14:54:02 | 2015-06-24 14:55:34 | 2015-06-24 15:41:36 | 0:46:02 | 0:27:06 | 0:18:56 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm043 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise' |
fail | 947793 | 2015-06-24 14:54:04 | 2015-06-24 14:56:24 | 2015-06-24 16:22:29 | 1:26:05 | 1:12:13 | 0:13:52 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm195 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947794 | 2015-06-24 14:54:05 | 2015-06-24 14:54:27 | 2015-06-24 15:16:27 | 0:22:00 | 0:09:07 | 0:12:53 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm072 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947795 | 2015-06-24 14:54:07 | 2015-06-24 14:57:19 | 2015-06-24 16:31:25 | 1:34:06 | 1:17:18 | 0:16:48 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm121 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests' |
fail | 947796 | 2015-06-24 14:54:08 | 2015-06-24 14:57:35 | 2015-06-24 16:19:40 | 1:22:05 | 1:08:53 | 0:13:12 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm167 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947797 | 2015-06-24 14:54:09 | 2015-06-24 14:55:30 | 2015-06-24 15:31:25 | 0:35:55 | 0:14:00 | 0:21:55 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm087 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947798 | 2015-06-24 14:54:11 | 2015-06-24 14:55:38 | 2015-06-24 15:13:38 | 0:18:00 | 0:07:43 | 0:10:17 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm146 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947799 | 2015-06-24 14:54:12 | 2015-06-24 14:56:20 | 2015-06-24 16:16:23 | 1:20:03 | 1:08:36 | 0:11:27 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm136 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947800 | 2015-06-24 14:54:14 | 2015-06-24 14:55:30 | 2015-06-24 15:35:28 | 0:39:58 | 0:23:48 | 0:16:10 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm056 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947801 | 2015-06-24 14:54:15 | 2015-06-24 14:56:46 | 2015-06-24 15:32:48 | 0:36:02 | 0:19:18 | 0:16:44 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm065 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947802 | 2015-06-24 14:54:17 | 2015-06-24 14:54:24 | 2015-06-24 16:36:30 | 1:42:06 | 1:23:54 | 0:18:12 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm041 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947803 | 2015-06-24 14:54:18 | 2015-06-24 14:54:34 | 2015-06-24 16:32:41 | 1:38:07 | 1:19:02 | 0:19:05 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm180 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947804 | 2015-06-24 14:54:20 | 2015-06-24 14:56:32 | 2015-06-24 16:24:38 | 1:28:06 | 1:12:30 | 0:15:36 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm044 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947805 | 2015-06-24 14:54:21 | 2015-06-24 14:55:30 | 2015-06-24 15:16:57 | 0:21:27 | 0:10:10 | 0:11:17 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm140 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947806 | 2015-06-24 14:54:22 | 2015-06-24 14:55:30 | 2015-06-24 16:20:59 | 1:25:29 | 1:14:02 | 0:11:27 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm038 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947807 | 2015-06-24 14:54:24 | 2015-06-24 14:57:38 | 2015-06-24 15:25:39 | 0:28:01 | 0:17:09 | 0:10:52 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm197 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947808 | 2015-06-24 14:54:25 | 2015-06-24 14:55:42 | 2015-06-24 15:21:42 | 0:26:00 | 0:11:23 | 0:14:37 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm153 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947809 | 2015-06-24 14:54:26 | 2015-06-24 14:54:42 | 2015-06-24 16:22:47 | 1:28:05 | 1:14:17 | 0:13:48 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm012 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947810 | 2015-06-24 14:54:28 | 2015-06-24 14:54:38 | 2015-06-24 16:20:43 | 1:26:05 | 1:12:10 | 0:13:55 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm005 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947811 | 2015-06-24 14:54:30 | 2015-06-24 14:57:28 | 2015-06-24 15:29:30 | 0:32:02 | 0:19:32 | 0:12:30 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm185 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty' |
fail | 947812 | 2015-06-24 14:54:31 | 2015-06-24 14:55:30 | 2015-06-24 16:22:52 | 1:27:22 | 1:15:19 | 0:12:03 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm008 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947813 | 2015-06-24 14:54:33 | 2015-06-24 14:57:41 | 2015-06-24 16:21:47 | 1:24:06 | 1:12:20 | 0:11:46 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm182 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests' |
fail | 947814 | 2015-06-24 14:54:34 | 2015-06-24 14:57:08 | 2015-06-24 15:33:09 | 0:36:01 | 0:21:19 | 0:14:42 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm054 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise' |
fail | 947815 | 2015-06-24 14:54:35 | 2015-06-24 14:56:39 | 2015-06-24 15:28:40 | 0:32:01 | 0:16:26 | 0:15:35 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm127 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
pass | 947816 | 2015-06-24 14:54:37 | 2015-06-24 14:56:43 | 2015-06-24 16:38:50 | 1:42:07 | 1:26:04 | 0:16:03 | vps | master | ubuntu | 12.04 | upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_12.04.yaml} | 3 | |
fail | 947817 | 2015-06-24 14:54:38 | 2015-06-24 14:57:11 | 2015-06-24 15:17:12 | 0:20:01 | 0:08:51 | 0:11:10 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm141 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947818 | 2015-06-24 14:54:40 | 2015-06-24 14:55:30 | 2015-06-24 15:48:53 | 0:53:23 | 0:36:36 | 0:16:47 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm013 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty' |
fail | 947819 | 2015-06-24 14:54:41 | 2015-06-24 14:57:15 | 2015-06-24 15:29:16 | 0:32:01 | 0:17:31 | 0:14:30 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm154 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947820 | 2015-06-24 14:54:43 | 2015-06-24 14:57:04 | 2015-06-24 16:27:10 | 1:30:06 | 1:15:23 | 0:14:43 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm105 with status 128: 'git clone -b wip-11570 git://git.ceph.com/s3-tests.git /home/ubuntu/cephtest/s3-tests' |
fail | 947821 | 2015-06-24 14:54:44 | 2015-06-24 14:56:50 | 2015-06-24 15:32:51 | 0:36:01 | 0:21:26 | 0:14:35 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm064 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise' |
fail | 947822 | 2015-06-24 14:54:46 | 2015-06-24 14:56:28 | 2015-06-24 16:20:33 | 1:24:05 | 1:12:17 | 0:11:48 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm198 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947823 | 2015-06-24 14:54:48 | 2015-06-24 14:55:30 | 2015-06-24 16:21:13 | 1:25:43 | 1:14:40 | 0:11:03 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm110 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
pass | 947824 | 2015-06-24 14:54:50 | 2015-06-24 14:57:01 | 2015-06-24 16:49:08 | 1:52:07 | 1:37:01 | 0:15:06 | vps | master | ubuntu | 14.04 | upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_14.04.yaml} | 3 | |
fail | 947825 | 2015-06-24 14:54:51 | 2015-06-24 14:55:30 | 2015-06-24 15:21:04 | 0:25:34 | 0:10:42 | 0:14:52 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm155 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947826 | 2015-06-24 14:54:52 | 2015-06-24 14:56:20 | 2015-06-24 15:36:22 | 0:40:02 | 0:26:00 | 0:14:02 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm103 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'" |
fail | 947827 | 2015-06-24 14:54:54 | 2015-06-24 14:56:57 | 2015-06-24 15:30:59 | 0:34:02 | 0:18:07 | 0:15:55 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm082 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947828 | 2015-06-24 14:54:56 | 2015-06-24 14:57:22 | 2015-06-24 15:29:23 | 0:32:01 | 0:17:31 | 0:14:30 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm098 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947829 | 2015-06-24 14:54:57 | 2015-06-24 14:57:25 | 2015-06-24 16:27:32 | 1:30:07 | 1:14:46 | 0:15:21 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm097 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947830 | 2015-06-24 14:54:58 | 2015-06-24 14:55:30 | 2015-06-24 16:27:17 | 1:31:47 | 1:16:09 | 0:15:38 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm053 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947831 | 2015-06-24 14:55:00 | 2015-06-24 14:55:30 | 2015-06-24 16:21:20 | 1:25:50 | 1:13:39 | 0:12:11 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm051 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947832 | 2015-06-24 14:55:01 | 2015-06-24 14:57:32 | 2015-06-24 16:21:37 | 1:24:05 | 1:09:17 | 0:14:48 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm151 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 947833 | 2015-06-24 14:55:03 | 2015-06-24 14:55:30 | 2015-06-24 15:27:22 | 0:31:52 | 0:17:40 | 0:14:12 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm050 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |