User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-11-26 04:20:02 | 2017-11-26 04:20:28 | 2017-11-26 12:22:37 | 8:02:09 | upgrade:jewel-x | kraken | vps | ad30823 | 19 | 11 | 2 |
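In the summary row, the Runtime column appears to be the wall-clock difference between the Started and Updated timestamps. A minimal Python sketch checking that, using the timestamps from the row above:

```python
from datetime import datetime

# Timestamps copied from the run summary row above.
FMT = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2017-11-26 04:20:28", FMT)
updated = datetime.strptime("2017-11-26 12:22:37", FMT)

runtime = updated - started
assert str(runtime) == "8:02:09"  # matches the Runtime column
```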
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1891286 | 2017-11-26 04:20:23 | 2017-11-26 04:20:28 | 2017-11-26 06:26:30 | 2:06:02 | 1:23:42 | 0:42:20 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
fail | 1891288 | 2017-11-26 04:20:24 | 2017-11-26 04:20:28 | 2017-11-26 11:00:36 | 6:40:08 | 0:55:58 | 5:44:10 | vps | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_latest.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_refcount.sh) on vpm007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_refcount.sh'
pass | 1891290 | 2017-11-26 04:20:25 | 2017-11-26 04:20:28 | 2017-11-26 07:56:32 | 3:36:04 | 2:37:32 | 0:58:32 | vps | master | centos | 7.3 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-kraken.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} | 3 | |
pass | 1891292 | 2017-11-26 04:20:25 | 2017-11-26 04:20:29 | 2017-11-26 07:06:31 | 2:46:02 | 1:06:42 | 1:39:20 | vps | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-kraken.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.3.yaml} | 3 | |
fail | 1891295 | 2017-11-26 04:20:26 | 2017-11-26 04:20:29 | 2017-11-26 08:18:33 | 3:58:04 | 0:44:21 | 3:13:43 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
failed to complete snap trimming before timeout
fail | 1891297 | 2017-11-26 04:20:27 | 2017-11-26 04:20:29 | 2017-11-26 05:38:29 | 1:18:00 | 0:20:50 | 0:57:10 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_refcount.sh) on vpm033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_refcount.sh'
fail | 1891299 | 2017-11-26 04:20:27 | 2017-11-26 04:20:29 | 2017-11-26 08:46:33 | 4:26:04 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@vpm089.front.sepia.ceph.com
pass | 1891301 | 2017-11-26 04:20:28 | 2017-11-26 04:20:29 | 2017-11-26 09:22:35 | 5:02:06 | 2:05:06 | 2:57:00 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
pass | 1891303 | 2017-11-26 04:20:29 | 2017-11-26 04:20:30 | 2017-11-26 06:30:32 | 2:10:02 | 1:19:49 | 0:50:13 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
fail | 1891305 | 2017-11-26 04:20:29 | 2017-11-26 04:20:32 | 2017-11-26 05:08:32 | 0:48:00 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@vpm143.front.sepia.ceph.com
pass | 1891307 | 2017-11-26 04:20:30 | 2017-11-26 04:20:31 | 2017-11-26 06:40:33 | 2:20:02 | 1:34:48 | 0:45:14 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
fail | 1891309 | 2017-11-26 04:20:31 | 2017-11-26 04:20:32 | 2017-11-26 06:20:33 | 2:00:01 | 0:07:51 | 1:52:10 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
{'vpm085.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)', 'E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'], 'cmd': 'apt-get install python-apt -y -q', '_ansible_no_log': False, 'stdout': '', 'changed': False, 'failed': True, 'stderr': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'rc': 100, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': True, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'package': None, 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'stdout_lines': [], 'msg': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'}}
fail | 1891311 | 2017-11-26 04:20:31 | 2017-11-26 04:40:31 | 2017-11-26 10:46:38 | 6:06:07 | 0:55:24 | 5:10:43 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_refcount.sh) on vpm069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_refcount.sh'
pass | 1891313 | 2017-11-26 04:20:32 | 2017-11-26 04:48:39 | 2017-11-26 09:58:45 | 5:10:06 | 2:26:55 | 2:43:11 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-kraken.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
fail | 1891315 | 2017-11-26 04:20:33 | 2017-11-26 04:54:05 | 2017-11-26 06:32:07 | 1:38:02 | 0:44:04 | 0:53:58 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 1891317 | 2017-11-26 04:20:33 | 2017-11-26 04:56:03 | 2017-11-26 06:42:05 | 1:46:02 | 1:38:42 | 0:07:20 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
pass | 1891319 | 2017-11-26 04:20:34 | 2017-11-26 04:56:03 | 2017-11-26 06:48:05 | 1:52:02 | 1:25:48 | 0:26:14 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
pass | 1891321 | 2017-11-26 04:20:35 | 2017-11-26 04:56:03 | 2017-11-26 08:06:06 | 3:10:03 | 1:29:00 | 1:41:03 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
fail | 1891323 | 2017-11-26 04:20:35 | 2017-11-26 04:58:03 | 2017-11-26 08:54:08 | 3:56:05 | 1:09:44 | 2:46:21 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-kraken.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
failed to complete snap trimming before timeout
pass | 1891325 | 2017-11-26 04:20:36 | 2017-11-26 04:58:03 | 2017-11-26 08:40:07 | 3:42:04 | 1:25:50 | 2:16:14 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
pass | 1891327 | 2017-11-26 04:20:37 | 2017-11-26 04:58:03 | 2017-11-26 06:44:04 | 1:46:01 | 1:22:28 | 0:23:33 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
pass | 1891329 | 2017-11-26 04:20:37 | 2017-11-26 06:48:21 | 2017-11-26 11:10:26 | 4:22:05 | 1:40:11 | 2:41:54 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
pass | 1891332 | 2017-11-26 04:20:38 | 2017-11-26 06:52:12 | 2017-11-26 08:36:15 | 1:44:03 | 1:22:26 | 0:21:37 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
fail | 1891334 | 2017-11-26 04:20:39 | 2017-11-26 06:56:12 | 2017-11-26 09:10:14 | 2:14:02 | 0:58:09 | 1:15:53 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_latest.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_refcount.sh) on vpm017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_refcount.sh'
pass | 1891336 | 2017-11-26 04:20:39 | 2017-11-26 07:02:19 | 2017-11-26 09:32:22 | 2:30:03 | 2:17:12 | 0:12:51 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-kraken.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml} | 3 | |
dead | 1891338 | 2017-11-26 04:20:40 | 2017-11-26 07:06:44 | 2017-11-26 09:28:46 | 2:22:02 | 1:26:57 | 0:55:05 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
Failure Reason:
SSH connection to vpm121 was lost: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/clone.client.1'
pass | 1891340 | 2017-11-26 04:20:41 | 2017-11-26 07:06:47 | 2017-11-26 08:50:49 | 1:44:02 | 1:20:27 | 0:23:35 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
fail | 1891342 | 2017-11-26 04:20:41 | 2017-11-26 07:08:56 | 2017-11-26 10:53:00 | 3:44:04 | 0:50:46 | 2:53:18 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
failed to complete snap trimming before timeout |
||||||||||||||
pass | 1891344 | 2017-11-26 04:20:42 | 2017-11-26 07:12:00 | 2017-11-26 08:58:02 | 1:46:02 | 1:13:27 | 0:32:35 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
pass | 1891346 | 2017-11-26 04:20:43 | 2017-11-26 07:14:58 | 2017-11-26 10:59:02 | 3:44:04 | 1:40:11 | 2:03:53 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
pass | 1891348 | 2017-11-26 04:20:43 | 2017-11-26 07:18:33 | 2017-11-26 09:42:35 | 2:24:02 | 1:17:33 | 1:06:29 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 | |
dead | 1891349 | 2017-11-26 04:20:44 | 2017-11-26 07:24:28 | 2017-11-26 12:22:37 | 4:58:09 | 1:42:39 | 3:15:30 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
SSH connection to vpm127 was lost: "sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
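In the per-job rows, Runtime appears to decompose as Duration plus In Waiting (queue time). A short Python check of that relationship, using values copied from the first two job rows (1891286 and 1891288):

```python
from datetime import timedelta

def hms(value: str) -> timedelta:
    """Parse an H:MM:SS string from the table into a timedelta."""
    h, m, s = (int(part) for part in value.split(":"))
    return timedelta(hours=h, minutes=m, seconds=s)

# Job 1891286: Duration + In Waiting should equal Runtime.
assert hms("1:23:42") + hms("0:42:20") == hms("2:06:02")
# Job 1891288 shows the same relationship.
assert hms("0:55:58") + hms("5:44:10") == hms("6:40:08")
```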