User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
trociny | 2016-11-03 06:15:19 | 2016-11-03 06:17:14 | 2016-11-03 18:19:42 | 12:02:28 | upgrade | wip-mgolub-testing | vps | d861e59 | 3 | 30 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 513797 | 2016-11-03 06:16:09 | 2016-11-03 06:17:12 | 2016-11-03 06:35:11 | 0:17:59 | 0:15:38 | 0:02:21 | vps | master | | | upgrade/client-upgrade/firefly-client-x/basic/{0-cluster/start.yaml 1-install/firefly-client-x.yaml 2-workload/rbd_cli_import_export.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on vpm119 with status 95: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=firefly TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
fail | 513798 | 2016-11-03 06:16:09 | 2016-11-03 06:17:15 | 2016-11-03 09:47:18 | 3:30:03 | 1:40:53 | 1:49:10 | vps | master | centos | 7.2 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-all.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513799 | 2016-11-03 06:16:10 | 2016-11-03 06:17:12 | 2016-11-03 09:45:16 | 3:28:04 | 3:20:55 | 0:07:09 | vps | master | centos | 7.2 | upgrade/infernalis-x/parallel/{4-jewel.yaml 0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
dead | 513800 | 2016-11-03 06:16:10 | 2016-11-03 06:17:13 | 2016-11-03 18:19:37 | 12:02:24 | | | vps | master | centos | 7.2 | upgrade/jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
fail | 513801 | 2016-11-03 06:16:11 | 2016-11-03 06:17:15 | 2016-11-03 10:19:18 | 4:02:03 | 2:05:31 | 1:56:32 | vps | master | centos | 7.2 | upgrade/hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513802 | 2016-11-03 06:16:12 | 2016-11-03 06:17:13 | 2016-11-03 08:51:15 | 2:34:02 | 2:28:23 | 0:05:39 | vps | master | centos | 7.2 | upgrade/firefly-hammer-x/stress-split/{00-cluster/start.yaml 01-firefly-install/firefly.yaml 02-partial-upgrade-hammer/firsthalf.yaml 03-workload/rbd.yaml 04-mona-upgrade-hammer/mona.yaml 05-workload/{rbd-cls.yaml readwrite.yaml} 06-monb-upgrade-hammer/monb.yaml 07-workload/{radosbench.yaml rbd_api.yaml} 08-monc-upgrade-hammer/monc.yaml 09-workload/rbd-python.yaml 10-osds-upgrade-hammer/secondhalf.yaml 11-workload/snaps-few-objects.yaml 12-partial-upgrade-x/first.yaml 13-workload/rados_loadgen_big.yaml 14-mona-upgrade-x/mona.yaml 15-workload/rbd-import-export.yaml 16-monb-upgrade-x/monb.yaml 17-workload/readwrite.yaml 18-monc-upgrade-x/monc.yaml 19-workload/radosbench.yaml 20-osds-upgrade-x/osds_secondhalf.yaml 21-final-workload/{rados_stress_watch.yaml rbd_cls_tests.yaml rgw-swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 513803 | 2016-11-03 06:16:12 | 2016-11-03 06:17:13 | 2016-11-03 10:29:17 | 4:12:04 | 4:06:23 | 0:05:41 | vps | master | centos | 7.2 | upgrade/hammer-x/stress-split/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 | |
pass | 513804 | 2016-11-03 06:16:13 | 2016-11-03 06:17:13 | 2016-11-03 09:45:16 | 3:28:03 | 3:21:00 | 0:07:03 | vps | master | centos | 7.2 | upgrade/jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 | |
fail | 513805 | 2016-11-03 06:16:13 | 2016-11-03 06:17:13 | 2016-11-03 06:37:12 | 0:19:59 | 0:16:17 | 0:03:42 | vps | master | | | upgrade/client-upgrade/hammer-client-x/basic/{0-cluster/start.yaml 1-install/hammer-client-x.yaml 2-workload/rbd_api_tests.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_api.sh) on vpm179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_api.sh'
dead | 513806 | 2016-11-03 06:16:14 | 2016-11-03 06:17:13 | 2016-11-03 18:19:42 | 12:02:29 | | | vps | master | centos | 7.2 | upgrade/infernalis-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 |
fail | 513807 | 2016-11-03 06:16:15 | 2016-11-03 06:17:13 | 2016-11-03 07:59:14 | 1:42:01 | 1:38:36 | 0:03:25 | vps | master | ubuntu | 14.04 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-by-daemon.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513808 | 2016-11-03 06:16:15 | 2016-11-03 06:17:13 | 2016-11-03 06:33:12 | 0:15:59 | 0:13:39 | 0:02:20 | vps | master | | | upgrade/client-upgrade/infernalis-client-x/basic/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-workload/rbd_api_tests.yaml} | 2 |
Failure Reason: Command failed (workunit test rbd/test_librbd_api.sh) on vpm047 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-mgolub-testing-infernalis TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_api.sh'
fail | 513809 | 2016-11-03 06:16:16 | 2016-11-03 06:17:15 | 2016-11-03 08:27:16 | 2:10:01 | 1:46:42 | 0:23:19 | vps | master | centos | 7.2 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-by-daemon.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513810 | 2016-11-03 06:16:16 | 2016-11-03 06:17:14 | 2016-11-03 08:15:14 | 1:58:00 | 1:41:19 | 0:16:41 | vps | master | ubuntu | 14.04 | upgrade/hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513811 | 2016-11-03 06:16:17 | 2016-11-03 06:17:13 | 2016-11-03 07:55:13 | 1:38:00 | 1:35:17 | 0:02:43 | vps | master | ubuntu | 14.04 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-all.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513812 | 2016-11-03 06:16:18 | 2016-11-03 06:17:14 | 2016-11-03 09:49:16 | 3:32:02 | 3:29:01 | 0:03:01 | vps | master | | | upgrade/client-upgrade/hammer-client-x/rbd/{0-cluster/start.yaml 1-install/hammer-client-x.yaml 2-workload/rbd_notification_tests.yaml} | 2 |
Failure Reason: Command failed (workunit test rbd/notify_slave.sh) on vpm155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rbd/notify_slave.sh'
fail | 513813 | 2016-11-03 06:16:18 | 2016-11-03 06:17:14 | 2016-11-03 07:53:14 | 1:36:00 | 1:32:07 | 0:03:53 | vps | master | ubuntu | 14.04 | upgrade/jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513814 | 2016-11-03 06:16:19 | 2016-11-03 06:17:13 | 2016-11-03 08:55:15 | 2:38:02 | 2:34:57 | 0:03:05 | vps | master | ubuntu | 14.04 | upgrade/infernalis-x/parallel/{4-jewel.yaml 0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513815 | 2016-11-03 06:16:20 | 2016-11-03 06:17:14 | 2016-11-03 08:11:14 | 1:54:00 | 1:47:21 | 0:06:39 | vps | master | centos | 7.2 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-all.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513816 | 2016-11-03 06:16:20 | 2016-11-03 06:17:14 | 2016-11-03 10:37:18 | 4:20:04 | 2:36:30 | 1:43:34 | vps | master | centos | 7.2 | upgrade/infernalis-x/parallel/{4-jewel.yaml 0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513817 | 2016-11-03 06:16:21 | 2016-11-03 06:17:14 | 2016-11-03 09:47:17 | 3:30:03 | 1:34:12 | 1:55:51 | vps | master | centos | 7.2 | upgrade/jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513818 | 2016-11-03 06:16:22 | 2016-11-03 06:17:13 | 2016-11-03 09:55:17 | 3:38:04 | 3:35:03 | 0:03:01 | vps | master | | | upgrade/client-upgrade/infernalis-client-x/rbd/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-workload/rbd_notification_tests.yaml} | 2 |
Failure Reason: Command failed (workunit test rbd/notify_slave.sh) on vpm087 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=infernalis TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rbd/notify_slave.sh'
fail | 513819 | 2016-11-03 06:16:22 | 2016-11-03 06:17:14 | 2016-11-03 10:25:18 | 4:08:04 | 2:24:56 | 1:43:08 | vps | master | ubuntu | 14.04 | upgrade/firefly-hammer-x/stress-split/{00-cluster/start.yaml 01-firefly-install/firefly.yaml 02-partial-upgrade-hammer/firsthalf.yaml 03-workload/rbd.yaml 04-mona-upgrade-hammer/mona.yaml 05-workload/{rbd-cls.yaml readwrite.yaml} 06-monb-upgrade-hammer/monb.yaml 07-workload/{radosbench.yaml rbd_api.yaml} 08-monc-upgrade-hammer/monc.yaml 09-workload/rbd-python.yaml 10-osds-upgrade-hammer/secondhalf.yaml 11-workload/snaps-few-objects.yaml 12-partial-upgrade-x/first.yaml 13-workload/rados_loadgen_big.yaml 14-mona-upgrade-x/mona.yaml 15-workload/rbd-import-export.yaml 16-monb-upgrade-x/monb.yaml 17-workload/readwrite.yaml 18-monc-upgrade-x/monc.yaml 19-workload/radosbench.yaml 20-osds-upgrade-x/osds_secondhalf.yaml 21-final-workload/{rados_stress_watch.yaml rbd_cls_tests.yaml rgw-swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513820 | 2016-11-03 06:16:23 | 2016-11-03 06:17:14 | 2016-11-03 08:27:16 | 2:10:02 | 1:44:24 | 0:25:38 | vps | master | centos | 7.2 | upgrade/hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 513821 | 2016-11-03 06:16:24 | 2016-11-03 06:17:14 | 2016-11-03 09:59:17 | 3:42:03 | 3:21:25 | 0:20:38 | vps | master | ubuntu | 14.04 | upgrade/hammer-x/stress-split/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
fail | 513822 | 2016-11-03 06:16:24 | 2016-11-03 06:17:14 | 2016-11-03 08:13:14 | 1:56:00 | 1:53:22 | 0:02:38 | vps | master | ubuntu | 14.04 | upgrade/jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm113 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c'
fail | 513823 | 2016-11-03 06:16:25 | 2016-11-03 06:17:14 | 2016-11-03 07:59:14 | 1:42:00 | 1:38:22 | 0:03:38 | vps | master | ubuntu | 14.04 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-by-daemon.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513824 | 2016-11-03 06:16:25 | 2016-11-03 06:17:13 | 2016-11-03 06:35:12 | 0:17:59 | 0:15:28 | 0:02:31 | vps | master | | | upgrade/client-upgrade/hammer-client-x/basic/{0-cluster/start.yaml 1-install/hammer-client-x.yaml 2-workload/rbd_cli_import_export.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on vpm097 with status 95: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=\'--image-feature layering,exclusive-lock,object-map\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
fail | 513825 | 2016-11-03 06:16:26 | 2016-11-03 06:17:14 | 2016-11-03 06:31:12 | 0:13:58 | 0:11:14 | 0:02:44 | vps | master | ubuntu | 14.04 | upgrade/infernalis-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: {'vpm037.front.sepia.ceph.com': {'msg': 'One or more items failed', 'failed': True, 'changed': False}}
fail | 513826 | 2016-11-03 06:16:27 | 2016-11-03 06:17:13 | 2016-11-03 08:25:14 | 2:08:01 | 2:01:01 | 0:07:00 | vps | master | centos | 7.2 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-by-daemon.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_7.2.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513827 | 2016-11-03 06:16:27 | 2016-11-03 06:17:14 | 2016-11-03 08:15:15 | 1:58:01 | 1:40:28 | 0:17:33 | vps | master | ubuntu | 14.04 | upgrade/hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513828 | 2016-11-03 06:16:28 | 2016-11-03 06:17:12 | 2016-11-03 06:33:12 | 0:16:00 | 0:13:38 | 0:02:22 | vps | master | | | upgrade/client-upgrade/infernalis-client-x/basic/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-workload/rbd_cli_import_export.yaml} | 2 |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on vpm111 with status 95: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=infernalis TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=\'--image-feature layering,exclusive-lock,object-map\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
fail | 513829 | 2016-11-03 06:16:29 | 2016-11-03 06:17:14 | 2016-11-03 09:29:16 | 3:12:02 | 1:32:06 | 1:39:56 | vps | master | ubuntu | 14.04 | upgrade/jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 513830 | 2016-11-03 06:16:29 | 2016-11-03 06:17:14 | 2016-11-03 08:05:14 | 1:48:00 | 1:44:32 | 0:03:28 | vps | master | ubuntu | 14.04 | upgrade/firefly-hammer-x/parallel/{0-cluster/start.yaml 1-firelfy-hammer-install/firefly-hammer.yaml 2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-firefly-hammer-x-upgrade/firefly-hammer-x.yaml 5-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 6-upgrade-sequence/upgrade-all.yaml 7-final-workload/{ec-rados-plugin=jerasure-k=2-m=1.yaml ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test-upgrade-v9.0.1.sh'
fail | 513831 | 2016-11-03 06:16:30 | 2016-11-03 06:17:13 | 2016-11-03 08:47:15 | 2:30:02 | 2:27:18 | 0:02:44 | vps | master | ubuntu | 14.04 | upgrade/infernalis-x/parallel/{4-jewel.yaml 0-cluster/{openstack.yaml start.yaml} 1-infernalis-install/infernalis.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds