User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sage | 2017-08-03 19:55:44 | 2017-08-03 19:57:13 | 2017-08-03 22:11:21 | 2:14:08 | upgrade:jewel-x | master | smithi | 133e712 | 6 | 10 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1479747 | 2017-08-03 19:56:19 | 2017-08-03 19:57:13 | 2017-08-03 20:47:18 | 0:50:05 | 0:46:07 | 0:03:58 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1479748 | 2017-08-03 19:56:20 | 2017-08-03 19:57:13 | 2017-08-03 20:39:13 | 0:42:00 | 0:36:47 | 0:05:13 | smithi | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed on smithi023 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 1479749 | 2017-08-03 19:56:21 | 2017-08-03 19:57:13 | 2017-08-03 22:05:15 | 2:08:02 | 2:06:10 | 0:01:52 | smithi | master | centos | 7.3 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
pass | 1479750 | 2017-08-03 19:56:21 | 2017-08-03 19:58:32 | 2017-08-03 20:54:33 | 0:56:01 | 0:54:39 | 0:01:22 | smithi | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1479751 | 2017-08-03 19:56:22 | 2017-08-03 19:58:32 | 2017-08-03 21:22:35 | 1:24:03 | 1:14:09 | 0:09:54 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-08-03 20:17:15.638795 mon.b mon.0 172.21.15.24:6789/0 224 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1479752 | 2017-08-03 19:56:23 | 2017-08-03 19:58:32 | 2017-08-03 20:16:31 | 0:17:59 | 0:12:12 | 0:05:47 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_rgw.sh) on smithi044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 1479753 | 2017-08-03 19:56:24 | 2017-08-03 19:58:32 | 2017-08-03 21:18:35 | 1:20:03 | 1:14:50 | 0:05:13 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-08-03 20:08:50.589008 mon.b mon.0 172.21.15.90:6789/0 95 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 1479754 | 2017-08-03 19:56:25 | 2017-08-03 19:58:35 | 2017-08-03 22:02:38 | 2:04:03 | 1:58:57 | 0:05:06 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
pass | 1479755 | 2017-08-03 19:56:25 | 2017-08-03 19:58:39 | 2017-08-03 20:58:41 | 1:00:02 | 0:56:40 | 0:03:22 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
fail | 1479756 | 2017-08-03 19:56:26 | 2017-08-03 19:59:01 | 2017-08-03 21:05:02 | 1:06:01 | 0:58:54 | 0:07:07 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-08-03 20:11:54.622607 mon.b mon.0 172.21.15.132:6789/0 126 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1479757 | 2017-08-03 19:56:27 | 2017-08-03 20:00:41 | 2017-08-03 20:40:41 | 0:40:00 | 0:35:10 | 0:04:50 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed on smithi067 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1479758 | 2017-08-03 19:56:28 | 2017-08-03 20:00:41 | 2017-08-03 21:10:42 | 1:10:01 | 1:04:10 | 0:05:51 | smithi | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-08-03 20:10:48.232233 mon.b mon.0 172.21.15.57:6789/0 231 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1479759 | 2017-08-03 19:56:28 | 2017-08-03 20:01:05 | 2017-08-03 21:05:05 | 1:04:00 | 0:59:57 | 0:04:03 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-08-03 20:10:06.752900 mon.a mon.0 172.21.15.190:6789/0 62 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 1479760 | 2017-08-03 19:56:29 | 2017-08-03 20:01:19 | 2017-08-03 22:11:21 | 2:10:02 | 2:02:13 | 0:07:49 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
pass | 1479761 | 2017-08-03 19:56:30 | 2017-08-03 20:01:55 | 2017-08-03 20:57:55 | 0:56:00 | 0:52:50 | 0:03:10 | smithi | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1479762 | 2017-08-03 19:56:31 | 2017-08-03 20:02:21 | 2017-08-03 20:54:21 | 0:52:00 | 0:47:00 | 0:05:00 | smithi | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds