User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-06-29 23:27:29 | 2015-06-29 23:30:37 | 2015-06-30 02:07:53 | 2:37:16 | upgrade:hammer | hammer | vps | — | 12 | 15 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 954224 | 2015-06-29 23:27:39 | 2015-06-29 23:30:37 | 2015-06-29 23:46:37 | 0:16:00 | 0:06:02 | 0:09:58 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm139 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 954225 | 2015-06-29 23:27:40 | 2015-06-29 23:31:15 | 2015-06-30 00:11:17 | 0:40:02 | 0:26:57 | 0:13:05 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Fuse mount failed to populate /sys/ after 31 seconds
pass | 954226 | 2015-06-29 23:27:41 | 2015-06-29 23:31:22 | 2015-06-30 01:07:36 | 1:36:14 | 1:27:32 | 0:08:42 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
pass | 954227 | 2015-06-29 23:27:42 | 2015-06-29 23:34:18 | 2015-06-30 01:18:36 | 1:44:18 | 1:33:47 | 0:10:31 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954228 | 2015-06-29 23:27:44 | 2015-06-29 23:35:29 | 2015-06-30 00:49:47 | 1:14:18 | 1:04:23 | 0:09:55 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm003 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 954229 | 2015-06-29 23:27:45 | 2015-06-29 23:38:53 | 2015-06-29 23:56:54 | 0:18:01 | 0:06:22 | 0:11:39 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm073 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 954230 | 2015-06-29 23:27:46 | 2015-06-29 23:43:16 | 2015-06-30 01:31:50 | 1:48:34 | 1:34:48 | 0:13:46 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954231 | 2015-06-29 23:27:47 | 2015-06-29 23:46:42 | 2015-06-30 00:02:42 | 0:16:00 | 0:06:53 | 0:09:07 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm139 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 954232 | 2015-06-29 23:27:49 | 2015-06-29 23:58:09 | 2015-06-30 01:46:26 | 1:48:17 | 1:34:56 | 0:13:21 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954233 | 2015-06-29 23:27:50 | 2015-06-29 23:53:25 | 2015-06-30 00:15:26 | 0:22:01 | 0:09:46 | 0:12:15 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm014 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 954234 | 2015-06-29 23:27:51 | 2015-06-29 23:54:53 | 2015-06-30 00:16:54 | 0:22:01 | 0:11:08 | 0:10:53 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm080 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
pass | 954235 | 2015-06-29 23:27:52 | 2015-06-29 23:53:04 | 2015-06-30 01:37:12 | 1:44:08 | 1:34:06 | 0:10:02 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
pass | 954236 | 2015-06-29 23:27:53 | 2015-06-30 00:17:36 | 2015-06-30 02:07:53 | 1:50:17 | 1:32:46 | 0:17:31 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954237 | 2015-06-29 23:27:55 | 2015-06-29 23:58:19 | 2015-06-30 01:20:37 | 1:22:18 | 1:09:18 | 0:13:00 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm197 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 954238 | 2015-06-29 23:27:56 | 2015-06-30 00:17:08 | 2015-06-30 00:43:10 | 0:26:02 | 0:06:40 | 0:19:22 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm146 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 954239 | 2015-06-29 23:27:57 | 2015-06-29 23:53:32 | 2015-06-30 01:41:51 | 1:48:19 | 1:34:27 | 0:13:52 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954240 | 2015-06-29 23:27:58 | 2015-06-29 23:53:53 | 2015-06-30 01:22:02 | 1:28:09 | 1:13:21 | 0:14:48 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm058 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
pass | 954241 | 2015-06-29 23:28:00 | 2015-06-30 00:02:33 | 2015-06-30 01:50:50 | 1:48:17 | 1:29:51 | 0:18:26 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954242 | 2015-06-29 23:28:01 | 2015-06-29 23:58:57 | 2015-06-30 01:05:04 | 1:06:07 | 0:51:29 | 0:14:38 | vps | master | ubuntu | 12.04 | upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on vpm172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=59f37a9bafc095181b3f41ec5d93ac58e2cda604 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
fail | 954243 | 2015-06-29 23:28:02 | 2015-06-29 23:56:10 | 2015-06-30 00:18:15 | 0:22:05 | 0:08:23 | 0:13:42 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm073 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
fail | 954244 | 2015-06-29 23:28:03 | 2015-06-29 23:54:39 | 2015-06-30 00:16:39 | 0:22:00 | 0:08:37 | 0:13:23 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm074 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 954245 | 2015-06-29 23:28:04 | 2015-06-29 23:54:32 | 2015-06-30 01:44:51 | 1:50:19 | 1:38:24 | 0:11:55 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
pass | 954246 | 2015-06-29 23:28:06 | 2015-06-29 23:55:27 | 2015-06-30 01:33:42 | 1:38:15 | 1:28:26 | 0:09:49 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
fail | 954247 | 2015-06-29 23:28:07 | 2015-06-30 00:00:08 | 2015-06-30 00:38:20 | 0:38:12 | 0:08:51 | 0:29:21 | vps | master | ubuntu | 12.04 | upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed (s3 tests against rgw) on vpm195 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
fail | 954248 | 2015-06-29 23:28:08 | 2015-06-30 00:38:20 | 2015-06-30 01:04:15 | 0:25:55 | 0:09:19 | 0:16:36 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.1.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
Failure Reason:
Command failed on vpm041 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 2000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 954249 | 2015-06-29 23:28:09 | 2015-06-29 23:58:05 | 2015-06-30 01:48:21 | 1:50:16 | 1:37:19 | 0:12:57 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/v0.94.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 | |
pass | 954250 | 2015-06-29 23:28:11 | 2015-06-29 23:55:07 | 2015-06-30 01:39:27 | 1:44:20 | 1:31:37 | 0:12:43 | vps | master | ubuntu | 12.04 | upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/ubuntu_12.04.yaml} | 2 |
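As a sanity check, the per-job Status column above can be tallied to confirm the Pass/Fail totals in the suite summary row (12 pass, 15 fail). A minimal sketch (the status list is transcribed by hand from the job rows, in Job ID order):

```python
from collections import Counter

# Status of jobs 954224..954250, in order, as listed in the table above.
statuses = [
    "fail", "fail", "pass", "pass", "fail", "fail", "pass", "fail", "pass",
    "fail", "fail", "pass", "pass", "fail", "fail", "pass", "fail", "pass",
    "fail", "fail", "fail", "pass", "pass", "fail", "fail", "pass", "pass",
]

tally = Counter(statuses)
print(tally["pass"], tally["fail"])  # 12 15 — matches the suite summary row
```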