User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-07-29 00:10:15 | 2015-07-29 00:42:20 | 2015-07-29 08:11:00 | 7:28:40 | upgrade:hammer-x | next | vps | — | 3 | 15 | 4 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 990365 | | 2015-07-29 00:15:07 | 2015-07-29 00:44:15 | | | | | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml} | — |
dead | 990366 | | 2015-07-29 00:15:09 | 2015-07-29 00:44:15 | | | | | vps | master | centos | 6.5 | upgrade:hammer-x/point-to-point-x/{point-to-point.yaml distros/centos_6.5.yaml} | — |
fail | 990367 | | 2015-07-29 00:15:10 | 2015-07-29 00:42:20 | 2015-07-29 03:14:29 | 2:32:09 | 2:23:42 | 0:08:27 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_6.5.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990368 | | 2015-07-29 00:15:12 | 2015-07-29 00:44:43 | 2015-07-29 03:18:53 | 2:34:10 | 2:27:45 | 0:06:25 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_6.5.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990369 | | 2015-07-29 00:15:13 | 2015-07-29 00:44:29 | 2015-07-29 08:11:00 | 7:26:31 | 4:07:29 | 3:19:02 | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd.sh) on vpm124 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
fail | 990370 | | 2015-07-29 00:15:14 | 2015-07-29 00:45:02 | 2015-07-29 04:15:19 | 3:30:17 | 0:44:32 | 2:45:45 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990371 | | 2015-07-29 00:15:16 | 2015-07-29 00:44:49 | 2015-07-29 06:39:16 | 5:54:27 | 2:17:28 | 3:36:59 | vps | master | debian | 7.0 | upgrade:hammer-x/point-to-point-x/{point-to-point.yaml distros/debian_7.0.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990372 | | 2015-07-29 00:15:17 | 2015-07-29 00:44:35 | 2015-07-29 04:36:52 | 3:52:17 | 0:34:47 | 3:17:30 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990373 | | 2015-07-29 00:15:18 | 2015-07-29 00:45:36 | 2015-07-29 04:21:52 | 3:36:16 | 0:41:31 | 2:54:45 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/debian_7.0.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 990374 | | 2015-07-29 00:15:19 | 2015-07-29 00:47:08 | 2015-07-29 05:17:27 | 4:30:19 | 2:57:19 | 1:33:00 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 |
fail | 990375 | | 2015-07-29 00:15:21 | 2015-07-29 00:47:54 | 2015-07-29 04:46:11 | 3:58:17 | 0:49:02 | 3:09:15 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
dead | 990376 | | 2015-07-29 00:15:22 | 2015-07-29 02:19:30 | 2015-07-29 03:07:32 | 0:48:02 | 0:34:06 | 0:13:56 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: SSH connection to vpm129 was lost: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=0.94.2-156-g8355bda-1trusty ceph-dbg=0.94.2-156-g8355bda-1trusty ceph-mds=0.94.2-156-g8355bda-1trusty ceph-mds-dbg=0.94.2-156-g8355bda-1trusty ceph-common=0.94.2-156-g8355bda-1trusty ceph-common-dbg=0.94.2-156-g8355bda-1trusty ceph-fuse=0.94.2-156-g8355bda-1trusty ceph-fuse-dbg=0.94.2-156-g8355bda-1trusty ceph-test=0.94.2-156-g8355bda-1trusty ceph-test-dbg=0.94.2-156-g8355bda-1trusty radosgw=0.94.2-156-g8355bda-1trusty radosgw-dbg=0.94.2-156-g8355bda-1trusty python-ceph=0.94.2-156-g8355bda-1trusty libcephfs1=0.94.2-156-g8355bda-1trusty libcephfs1-dbg=0.94.2-156-g8355bda-1trusty libcephfs-java=0.94.2-156-g8355bda-1trusty libcephfs-jni=0.94.2-156-g8355bda-1trusty librados2=0.94.2-156-g8355bda-1trusty librados2-dbg=0.94.2-156-g8355bda-1trusty librbd1=0.94.2-156-g8355bda-1trusty librbd1-dbg=0.94.2-156-g8355bda-1trusty rbd-fuse=0.94.2-156-g8355bda-1trusty librados2=0.94.2-156-g8355bda-1trusty librados2-dbg=0.94.2-156-g8355bda-1trusty librbd1=0.94.2-156-g8355bda-1trusty librbd1-dbg=0.94.2-156-g8355bda-1trusty'
fail | 990377 | | 2015-07-29 00:15:23 | 2015-07-29 02:29:19 | 2015-07-29 04:31:27 | 2:02:08 | 1:23:53 | 0:38:15 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/point-to-point-x/{point-to-point.yaml distros/ubuntu_12.04.yaml} | 3 |
Failure Reason: Command failed on vpm016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-156-g8355bda-1precise ceph=0.94.2-156-g8355bda-1precise ceph-test=0.94.2-156-g8355bda-1precise ceph-dbg=0.94.2-156-g8355bda-1precise rbd-fuse=0.94.2-156-g8355bda-1precise librados2-dbg=0.94.2-156-g8355bda-1precise ceph-fuse-dbg=0.94.2-156-g8355bda-1precise libcephfs-jni=0.94.2-156-g8355bda-1precise libcephfs1-dbg=0.94.2-156-g8355bda-1precise radosgw=0.94.2-156-g8355bda-1precise librados2=0.94.2-156-g8355bda-1precise libcephfs1=0.94.2-156-g8355bda-1precise ceph-mds=0.94.2-156-g8355bda-1precise radosgw-dbg=0.94.2-156-g8355bda-1precise librbd1=0.94.2-156-g8355bda-1precise python-ceph=0.94.2-156-g8355bda-1precise ceph-test-dbg=0.94.2-156-g8355bda-1precise ceph-fuse=0.94.2-156-g8355bda-1precise ceph-common=0.94.2-156-g8355bda-1precise libcephfs-java=0.94.2-156-g8355bda-1precise ceph-common-dbg=0.94.2-156-g8355bda-1precise ceph-mds-dbg=0.94.2-156-g8355bda-1precise'
fail | 990378 | | 2015-07-29 00:15:25 | 2015-07-29 02:30:17 | 2015-07-29 03:42:22 | 1:12:05 | 0:46:34 | 0:25:31 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
dead | 990379 | | 2015-07-29 00:15:26 | 2015-07-29 02:42:22 | 2015-07-29 03:46:26 | 1:04:04 | 0:32:47 | 0:31:17 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_12.04.yaml} | 3 |
Failure Reason: SSH connection to vpm129 was lost: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=0.94.2-156-g8355bda-1precise ceph-dbg=0.94.2-156-g8355bda-1precise ceph-mds=0.94.2-156-g8355bda-1precise ceph-mds-dbg=0.94.2-156-g8355bda-1precise ceph-common=0.94.2-156-g8355bda-1precise ceph-common-dbg=0.94.2-156-g8355bda-1precise ceph-fuse=0.94.2-156-g8355bda-1precise ceph-fuse-dbg=0.94.2-156-g8355bda-1precise ceph-test=0.94.2-156-g8355bda-1precise ceph-test-dbg=0.94.2-156-g8355bda-1precise radosgw=0.94.2-156-g8355bda-1precise radosgw-dbg=0.94.2-156-g8355bda-1precise python-ceph=0.94.2-156-g8355bda-1precise libcephfs1=0.94.2-156-g8355bda-1precise libcephfs1-dbg=0.94.2-156-g8355bda-1precise libcephfs-java=0.94.2-156-g8355bda-1precise libcephfs-jni=0.94.2-156-g8355bda-1precise librados2=0.94.2-156-g8355bda-1precise librados2-dbg=0.94.2-156-g8355bda-1precise librbd1=0.94.2-156-g8355bda-1precise librbd1-dbg=0.94.2-156-g8355bda-1precise rbd-fuse=0.94.2-156-g8355bda-1precise librados2=0.94.2-156-g8355bda-1precise librados2-dbg=0.94.2-156-g8355bda-1precise librbd1=0.94.2-156-g8355bda-1precise librbd1-dbg=0.94.2-156-g8355bda-1precise'
fail | 990380 | | 2015-07-29 00:15:27 | 2015-07-29 02:50:30 | 2015-07-29 03:50:34 | 1:00:04 | 0:22:14 | 0:37:50 | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 |
Failure Reason: Command failed with status 3: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm124.front.sepia.ceph.com,vpm136.front.sepia.ceph.com,vpm177.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
pass | 990381 | | 2015-07-29 00:15:28 | 2015-07-29 02:51:13 | 2015-07-29 05:17:24 | 2:26:11 | 2:12:30 | 0:13:41 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 |
fail | 990382 | | 2015-07-29 00:15:30 | 2015-07-29 03:03:33 | 2015-07-29 04:47:40 | 1:44:07 | 1:28:14 | 0:15:53 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/point-to-point-x/{point-to-point.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command failed on vpm118 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install librbd1-dbg=0.94.2-156-g8355bda-1trusty ceph=0.94.2-156-g8355bda-1trusty ceph-test=0.94.2-156-g8355bda-1trusty ceph-dbg=0.94.2-156-g8355bda-1trusty rbd-fuse=0.94.2-156-g8355bda-1trusty librados2-dbg=0.94.2-156-g8355bda-1trusty ceph-fuse-dbg=0.94.2-156-g8355bda-1trusty libcephfs-jni=0.94.2-156-g8355bda-1trusty libcephfs1-dbg=0.94.2-156-g8355bda-1trusty radosgw=0.94.2-156-g8355bda-1trusty librados2=0.94.2-156-g8355bda-1trusty libcephfs1=0.94.2-156-g8355bda-1trusty ceph-mds=0.94.2-156-g8355bda-1trusty radosgw-dbg=0.94.2-156-g8355bda-1trusty librbd1=0.94.2-156-g8355bda-1trusty python-ceph=0.94.2-156-g8355bda-1trusty ceph-test-dbg=0.94.2-156-g8355bda-1trusty ceph-fuse=0.94.2-156-g8355bda-1trusty ceph-common=0.94.2-156-g8355bda-1trusty libcephfs-java=0.94.2-156-g8355bda-1trusty ceph-common-dbg=0.94.2-156-g8355bda-1trusty ceph-mds-dbg=0.94.2-156-g8355bda-1trusty'
fail | 990383 | | 2015-07-29 00:15:31 | 2015-07-29 03:04:49 | 2015-07-29 04:06:53 | 1:02:04 | 0:49:42 | 0:12:22 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990384 | | 2015-07-29 00:15:32 | 2015-07-29 03:07:41 | 2015-07-29 04:05:44 | 0:58:03 | 0:51:17 | 0:06:46 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 990385 | | 2015-07-29 00:15:33 | 2015-07-29 03:10:10 | 2015-07-29 04:32:15 | 1:22:05 | 0:15:51 | 1:06:14 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 |
Failure Reason: Command failed with status 3: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm097.front.sepia.ceph.com,vpm158.front.sepia.ceph.com,vpm098.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
pass | 990386 | | 2015-07-29 00:15:34 | 2015-07-29 03:11:12 | 2015-07-29 07:01:30 | 3:50:18 | 2:23:18 | 1:27:00 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |