User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-06-24 22:00:44 | 2015-06-24 22:01:21 | 2015-06-24 23:21:32 | 1:20:11 | upgrade:hammer | hammer | vps | — | 10 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 947850 | 2015-06-24 22:01:10 | 2015-06-24 22:01:17 | 2015-06-24 22:19:17 | 0:18:00 | 0:06:52 | 0:11:08 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 |
Failure Reason: Command failed on vpm015 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
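Several of the failing jobs above run the same `ceph_test_rados` workload. As a rough sketch, the relative operation frequencies implied by its `--op <name> <weight>` flags can be computed from the weights shown on the command line (the weights are taken verbatim from the report; that they are normalized by their sum is an assumption about how `ceph_test_rados` selects operations):

```python
# Op weights as given on the ceph_test_rados command line in the report above.
weights = {
    "snap_remove": 50,
    "snap_create": 50,
    "rollback": 50,
    "read": 100,
    "write": 50,
    "write_excl": 50,
    "delete": 50,
}

total = sum(weights.values())  # 400
# Assumed interpretation: each op is picked with probability weight / total.
freqs = {op: w / total for op, w in weights.items()}

print(freqs["read"])   # 0.25  (reads are weighted twice as heavily as any other op)
print(freqs["write"])  # 0.125
```

Under that reading, reads account for a quarter of the randomized operations and every other op for one eighth, which matches the intent of a read-heavy snapshot/rollback stress workload.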
fail | 947851 | 2015-06-24 22:01:11 | 2015-06-24 22:01:21 | 2015-06-24 23:13:25 | 1:12:04 | 1:00:56 | 0:11:08 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 |
Failure Reason: Command failed on vpm159 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 947852 | 2015-06-24 22:01:12 | 2015-06-24 22:01:18 | 2015-06-24 23:17:22 | 1:16:04 | 1:04:27 | 0:11:37 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 |
Failure Reason: Command failed (s3 tests against rgw) on vpm062 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 947853 | 2015-06-24 22:01:14 | 2015-06-24 22:02:26 | 2015-06-24 22:33:25 | 0:30:59 | 0:18:40 | 0:12:19 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 |
Failure Reason: Command failed (s3 tests against rgw) on vpm099 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 947854 | 2015-06-24 22:01:15 | 2015-06-24 22:02:26 | 2015-06-24 23:21:32 | 1:19:06 | 1:05:54 | 0:13:12 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 |
Failure Reason: Command failed on vpm120 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 947855 | 2015-06-24 22:01:16 | 2015-06-24 22:02:27 | 2015-06-24 22:23:36 | 0:21:09 | 0:10:48 | 0:10:21 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 |
Failure Reason: Command failed on vpm068 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1trusty ceph=0.94.2-12-g78d894a-1trusty ceph-test=0.94.2-12-g78d894a-1trusty ceph-dbg=0.94.2-12-g78d894a-1trusty rbd-fuse=0.94.2-12-g78d894a-1trusty librados2-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse-dbg=0.94.2-12-g78d894a-1trusty libcephfs-jni=0.94.2-12-g78d894a-1trusty libcephfs1-dbg=0.94.2-12-g78d894a-1trusty radosgw=0.94.2-12-g78d894a-1trusty librados2=0.94.2-12-g78d894a-1trusty libcephfs1=0.94.2-12-g78d894a-1trusty ceph-mds=0.94.2-12-g78d894a-1trusty radosgw-dbg=0.94.2-12-g78d894a-1trusty librbd1=0.94.2-12-g78d894a-1trusty python-ceph=0.94.2-12-g78d894a-1trusty ceph-test-dbg=0.94.2-12-g78d894a-1trusty ceph-fuse=0.94.2-12-g78d894a-1trusty ceph-common=0.94.2-12-g78d894a-1trusty libcephfs-java=0.94.2-12-g78d894a-1trusty ceph-common-dbg=0.94.2-12-g78d894a-1trusty ceph-mds-dbg=0.94.2-12-g78d894a-1trusty'
fail | 947856 | 2015-06-24 22:01:18 | 2015-06-24 22:02:27 | 2015-06-24 22:23:30 | 0:21:03 | 0:10:24 | 0:10:39 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 |
Failure Reason: Command failed on vpm041 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-12-g78d894a-1precise ceph=0.94.2-12-g78d894a-1precise ceph-test=0.94.2-12-g78d894a-1precise ceph-dbg=0.94.2-12-g78d894a-1precise rbd-fuse=0.94.2-12-g78d894a-1precise librados2-dbg=0.94.2-12-g78d894a-1precise ceph-fuse-dbg=0.94.2-12-g78d894a-1precise libcephfs-jni=0.94.2-12-g78d894a-1precise libcephfs1-dbg=0.94.2-12-g78d894a-1precise radosgw=0.94.2-12-g78d894a-1precise librados2=0.94.2-12-g78d894a-1precise libcephfs1=0.94.2-12-g78d894a-1precise ceph-mds=0.94.2-12-g78d894a-1precise radosgw-dbg=0.94.2-12-g78d894a-1precise librbd1=0.94.2-12-g78d894a-1precise python-ceph=0.94.2-12-g78d894a-1precise ceph-test-dbg=0.94.2-12-g78d894a-1precise ceph-fuse=0.94.2-12-g78d894a-1precise ceph-common=0.94.2-12-g78d894a-1precise libcephfs-java=0.94.2-12-g78d894a-1precise ceph-common-dbg=0.94.2-12-g78d894a-1precise ceph-mds-dbg=0.94.2-12-g78d894a-1precise'
fail | 947857 | 2015-06-24 22:01:19 | 2015-06-24 22:02:27 | 2015-06-24 22:21:42 | 0:19:15 | 0:08:26 | 0:10:49 | vps | master | ubuntu | 14.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_14.04.yaml} | 2 |
Failure Reason: Command failed on vpm101 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 947858 | 2015-06-24 22:01:20 | 2015-06-24 22:02:27 | 2015-06-24 23:19:38 | 1:17:11 | 1:06:03 | 0:11:08 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 |
Failure Reason: Command failed (s3 tests against rgw) on vpm200 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 947859 | 2015-06-24 22:01:21 | 2015-06-24 22:02:27 | 2015-06-24 22:37:40 | 0:35:13 | 0:26:04 | 0:09:09 | vps | master | ubuntu | 14.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml} | 2 |
Failure Reason: Fuse mount failed to populate /sys/ after 31 seconds