User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2016-10-08 04:20:09 | 2016-10-08 06:13:33 | 2016-10-08 10:02:09 | 3:48:36 | upgrade:jewel-x | master | vps | a4ce1f5 | 2 | 9 |
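The Runtime columns here are plain wall-clock spans: the summary Runtime is Updated minus Started, and in the job rows below Runtime further splits into Duration (time actually running) plus In Waiting (time queued for machines). A minimal sketch of that arithmetic, using figures copied from this run and assuming all timestamps share one timezone:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M:%S"

def runtime(started, updated):
    """Runtime as reported: wall-clock span from Started to Updated."""
    return datetime.strptime(updated, FMT) - datetime.strptime(started, FMT)

def hms(text):
    """Parse an H:MM:SS table cell into a timedelta."""
    h, m, s = (int(part) for part in text.split(":"))
    return timedelta(hours=h, minutes=m, seconds=s)

# Summary row: started 06:13:33, updated 10:02:09 -> the reported 3:48:36.
print(runtime("2016-10-08 06:13:33", "2016-10-08 10:02:09"))

# Job 461814: Duration 2:52:39 + In Waiting 1:09:27 = Runtime 4:02:06.
print(hms("2:52:39") + hms("1:09:27"))
```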
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 461814 | 2016-10-08 04:20:54 | 2016-10-08 06:00:03 | 2016-10-08 10:02:09 | 4:02:06 | 2:52:39 | 1:09:27 | vps | master | centos | 7.2 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
fail | 461815 | 2016-10-08 04:20:55 | 2016-10-08 06:00:03 | 2016-10-08 07:56:05 | 1:56:02 | 0:44:15 | 1:11:47 | vps | master | centos | 7.2 | upgrade:jewel-x/point-to-point-x/{point-to-point-upgrade.yaml distros/centos_7.2.yaml} | 3 |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on vpm013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rbd.sh'
fail | 461816 | 2016-10-08 04:20:56 | 2016-10-08 06:01:38 | 2016-10-08 07:29:40 | 1:28:02 | 0:36:55 | 0:51:07 | vps | master | centos | 7.2 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason:
Command failed on vpm039 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 461817 | 2016-10-08 04:20:56 | 2016-10-08 06:01:38 | 2016-10-08 07:37:40 | 1:36:02 | 0:25:55 | 1:10:07 | vps | master | centos | 7.2 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.2.yaml} | 3 |
Failure Reason:
Command failed on vpm063 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 461818 | 2016-10-08 04:20:57 | 2016-10-08 06:07:14 | 2016-10-08 07:05:15 | 0:58:01 | 0:32:35 | 0:25:26 | vps | master | | | upgrade:jewel-x/stress-split-erasure-code-x86_64/{0-x86_64.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 3 |
Failure Reason:
Command failed on vpm193 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 461819 | 2016-10-08 04:20:58 | 2016-10-08 06:09:27 | 2016-10-08 08:17:30 | 2:08:03 | 1:28:30 | 0:39:33 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason:
Command failed on vpm135 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 461820 | 2016-10-08 04:20:59 | 2016-10-08 06:13:24 | 2016-10-08 07:17:25 | 1:04:01 | 0:47:51 | 0:16:10 | vps | master | centos | 7.2 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason:
Command failed on vpm125 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 461821 | 2016-10-08 04:21:00 | 2016-10-08 06:13:24 | 2016-10-08 07:25:25 | 1:12:01 | 0:57:21 | 0:14:40 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{point-to-point-upgrade.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on vpm181 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rbd.sh'
fail | 461822 | 2016-10-08 04:21:01 | 2016-10-08 06:13:33 | 2016-10-08 06:35:33 | 0:22:00 | 0:12:05 | 0:09:55 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason:
{'vpm143.front.sepia.ceph.com': {'_ansible_parsed': False, '_ansible_no_log': False, 'module_stderr': 'Traceback (most recent call last):\n File "/tmp/ansible_twn8Is/ansible_module_apt.py", line 843, in <module>\n main()\n File "/tmp/ansible_twn8Is/ansible_module_apt.py", line 735, in main\n cache = apt.Cache()\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 107, in __init__\n self.open(progress)\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 151, in open\n self._cache = apt_pkg.Cache(progress)\nSystemError: E:Problem renaming the file /var/cache/apt/pkgcache.bin.oCAdqZ to /var/cache/apt/pkgcache.bin - rename (2: No such file or directory), W:You may want to run apt-get update to correct these problems\n', 'changed': False, 'module_stdout': '', 'failed': True, 'invocation': {'module_name': 'apt'}, 'msg': 'MODULE FAILURE'}}
fail | 461823 | 2016-10-08 04:21:01 | 2016-10-08 06:15:36 | 2016-10-08 07:25:37 | 1:10:01 | 0:33:20 | 0:36:41 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason:
Command failed on vpm171 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
pass | 461824 | 2016-10-08 04:21:02 | 2016-10-08 06:15:59 | 2016-10-08 09:16:04 | 3:00:05 | 2:33:22 | 0:26:43 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
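The Pass/Fail cells in the summary row are simple tallies of the per-job Status column. A minimal sketch of that bookkeeping, with the job IDs and statuses copied from the table above:

```python
from collections import Counter

# Job ID -> status, transcribed from the jobs table above.
jobs = {
    461814: "pass", 461815: "fail", 461816: "fail", 461817: "fail",
    461818: "fail", 461819: "fail", 461820: "fail", 461821: "fail",
    461822: "fail", 461823: "fail", 461824: "pass",
}

tally = Counter(jobs.values())
# Matches the summary row: Pass = 2, Fail = 9.
print(tally["pass"], tally["fail"])
```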