User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-08-04 04:23:02 | 2017-08-04 14:23:32 | 2017-08-05 05:10:12 | 14:46:40 | upgrade:jewel-x | master | ovh | 47480d8 | 6 | 38 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1481366 | 2017-08-04 04:23:41 | 2017-08-04 14:19:35 | 2017-08-05 02:22:12 | 12:02:37 | | | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 |
fail | 1481369 | 2017-08-04 04:23:41 | 2017-08-04 14:22:10 | 2017-08-04 16:44:12 | 2:22:02 | 0:41:43 | 1:40:19 | ovh | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason:
Command failed on ovh067 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 1481372 | 2017-08-04 04:23:42 | 2017-08-04 14:23:32 | 2017-08-04 17:01:34 | 2:38:02 | 2:13:08 | 0:24:54 | ovh | master | centos | 7.3 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
pass | 1481375 | 2017-08-04 04:23:43 | 2017-08-04 14:31:52 | 2017-08-04 16:33:53 | 2:02:01 | 0:56:51 | 1:05:10 | ovh | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1481378 | 2017-08-04 04:23:43 | 2017-08-04 14:38:38 | 2017-08-04 16:30:40 | 1:52:02 | 0:16:18 | 1:35:44 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on ovh021 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481382 | 2017-08-04 04:23:44 | 2017-08-04 14:39:25 | 2017-08-04 16:57:27 | 2:18:02 | 1:25:19 | 0:52:43 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 15:43:46.129266 mon.a mon.0 158.69.69.60:6789/0 173 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481385 | 2017-08-04 04:23:45 | 2017-08-04 14:41:26 | 2017-08-04 15:37:26 | 0:56:00 | 0:15:00 | 0:41:00 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh031 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481388 | 2017-08-04 04:23:46 | 2017-08-04 14:41:45 | 2017-08-04 17:49:49 | 3:08:04 | 2:25:50 | 0:42:14 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
"2017-08-04 15:38:58.200529 mon.b mon.0 158.69.69.28:6789/0 59 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 1481390 | 2017-08-04 04:23:46 | 2017-08-04 14:46:52 | 2017-08-04 16:48:53 | 2:02:01 | 1:11:38 | 0:50:23 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 15:53:31.593213 mon.b mon.0 158.69.69.50:6789/0 165 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481393 | 2017-08-04 04:23:47 | 2017-08-04 14:46:58 | 2017-08-04 18:05:02 | 3:18:04 | 1:12:43 | 2:05:21 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 17:03:27.175846 mon.b mon.0 158.69.71.21:6789/0 116 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
dead | 1481395 | 2017-08-04 04:23:48 | 2017-08-04 14:53:51 | 2017-08-05 02:56:25 | 12:02:34 | | | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
fail | 1481398 | 2017-08-04 04:23:48 | 2017-08-04 14:54:36 | 2017-08-04 16:16:36 | 1:22:00 | 0:22:49 | 0:59:11 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 1481401 | 2017-08-04 04:23:49 | 2017-08-04 15:04:28 | 2017-08-04 16:36:29 | 1:32:01 | 0:18:46 | 1:13:15 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh083 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481404 | 2017-08-04 04:23:50 | 2017-08-04 15:08:20 | 2017-08-04 16:00:21 | 0:52:01 | 0:19:30 | 0:32:31 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on ovh016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 1481407 | 2017-08-04 04:23:50 | 2017-08-04 15:21:30 | 2017-08-04 17:07:32 | 1:46:02 | 0:13:29 | 1:32:33 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh081 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481410 | 2017-08-04 04:23:51 | 2017-08-04 15:23:11 | 2017-08-04 17:21:12 | 1:58:01 | 1:12:23 | 0:45:38 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 16:20:59.696927 mon.a mon.0 158.69.70.36:6789/0 162 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481413 | 2017-08-04 04:23:52 | 2017-08-04 15:31:53 | 2017-08-04 19:03:57 | 3:32:04 | 1:13:11 | 2:18:53 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
"2017-08-04 18:03:35.918029 mon.b mon.0 158.69.72.109:6789/0 256 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 1481416 | 2017-08-04 04:23:52 | 2017-08-04 15:37:29 | 2017-08-05 03:40:08 | 12:02:39 | | | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 |
pass | 1481419 | 2017-08-04 04:23:53 | 2017-08-04 15:37:52 | 2017-08-04 20:03:57 | 4:26:05 | 2:11:38 | 2:14:27 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
pass | 1481422 | 2017-08-04 04:23:54 | 2017-08-04 15:40:31 | 2017-08-04 17:22:33 | 1:42:02 | 0:56:05 | 0:45:57 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml thrashosds-health.yaml} | 3 | |
fail | 1481425 | 2017-08-04 04:23:54 | 2017-08-04 15:56:07 | 2017-08-04 18:20:09 | 2:24:02 | 0:55:15 | 1:28:47 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1481428 | 2017-08-04 04:23:55 | 2017-08-04 16:00:31 | 2017-08-04 18:28:34 | 2:28:03 | 1:19:49 | 1:08:14 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
"2017-08-04 17:20:03.383954 mon.b mon.0 158.69.71.100:6789/0 148 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481431 | 2017-08-04 04:23:56 | 2017-08-04 16:01:50 | 2017-08-04 16:43:50 | 0:42:00 | 0:16:48 | 0:25:12 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481434 | 2017-08-04 04:23:56 | 2017-08-04 16:03:37 | 2017-08-04 21:03:42 | 5:00:05 | 2:21:36 | 2:38:29 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 18:53:20.593844 mon.b mon.0 158.69.72.234:6789/0 81 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481437 | 2017-08-04 04:23:57 | 2017-08-04 16:12:05 | 2017-08-04 17:56:07 | 1:44:02 | 1:09:27 | 0:34:35 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
"2017-08-04 16:59:26.550681 mon.b mon.0 158.69.71.22:6789/0 153 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481440 | 2017-08-04 04:23:58 | 2017-08-04 16:13:58 | 2017-08-04 18:10:00 | 1:56:02 | 1:15:55 | 0:40:07 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 17:06:37.248914 mon.a mon.0 158.69.71.36:6789/0 140 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
dead | 1481443 | 2017-08-04 04:23:58 | 2017-08-04 16:16:43 | 2017-08-05 04:19:20 | 12:02:37 | | | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 |
fail | 1481446 | 2017-08-04 04:23:59 | 2017-08-04 16:17:36 | 2017-08-04 17:57:37 | 1:40:01 | 0:45:46 | 0:54:15 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason:
Command failed on ovh083 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481448 | 2017-08-04 04:24:00 | 2017-08-04 16:30:06 | 2017-08-04 18:18:07 | 1:48:01 | 0:56:04 | 0:51:57 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1481451 | 2017-08-04 04:24:00 | 2017-08-04 16:30:06 | 2017-08-04 17:50:07 | 1:20:01 | 0:34:30 | 0:45:31 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh016 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481454 | 2017-08-04 04:24:01 | 2017-08-04 16:30:51 | 2017-08-04 18:24:54 | 1:54:03 | 0:16:36 | 1:37:27 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on ovh029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 1481457 | 2017-08-04 04:24:02 | 2017-08-04 16:33:33 | 2017-08-04 17:57:34 | 1:24:01 | 0:17:04 | 1:06:57 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on ovh097 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481460 | 2017-08-04 04:24:02 | 2017-08-04 16:33:42 | 2017-08-04 17:41:42 | 1:08:00 | 0:22:23 | 0:45:37 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 1481463 | 2017-08-04 04:24:03 | 2017-08-04 16:33:55 | 2017-08-04 19:19:58 | 2:46:03 | 1:14:58 | 1:31:05 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 18:17:54.705544 mon.b mon.0 158.69.72.0:6789/0 275 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 1481466 | 2017-08-04 04:24:04 | 2017-08-04 16:36:40 | 2017-08-05 04:39:13 | 12:02:33 | | | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
pass | 1481469 | 2017-08-04 04:24:04 | 2017-08-04 16:39:17 | 2017-08-04 19:57:22 | 3:18:05 | 2:14:59 | 1:03:06 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
pass | 1481472 | 2017-08-04 04:24:05 | 2017-08-04 16:43:58 | 2017-08-04 20:44:02 | 4:00:04 | 0:54:41 | 3:05:23 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml thrashosds-health.yaml} | 3 | |
fail | 1481476 | 2017-08-04 04:24:06 | 2017-08-04 16:44:14 | 2017-08-04 18:14:15 | 1:30:01 | 0:56:04 | 0:33:57 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1481478 | 2017-08-04 04:24:06 | 2017-08-04 16:48:56 | 2017-08-04 18:46:58 | 1:58:02 | 0:17:48 | 1:40:14 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh068 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
fail | 1481482 | 2017-08-04 04:24:07 | 2017-08-04 16:54:48 | 2017-08-04 20:14:51 | 3:20:03 | 1:11:04 | 2:08:59 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
"2017-08-04 19:02:37.112647 mon.b mon.0 158.69.73.165:6789/0 115 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481485 | 2017-08-04 04:24:08 | 2017-08-04 16:57:05 | 2017-08-04 21:35:14 | 4:38:09 | 2:24:12 | 2:13:57 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 19:24:46.857663 mon.a mon.0 158.69.73.14:6789/0 69 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481488 | 2017-08-04 04:24:09 | 2017-08-04 16:57:37 | 2017-08-04 18:31:38 | 1:34:01 | 0:12:59 | 1:21:02 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh028 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481491 | 2017-08-04 04:24:09 | 2017-08-04 17:01:48 | 2017-08-04 19:07:50 | 2:06:02 | 0:13:45 | 1:52:17 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on ovh064 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
dead | 1481494 | 2017-08-04 04:24:10 | 2017-08-04 17:07:36 | 2017-08-05 05:10:12 | 12:02:36 | | | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 |
fail | 1481497 | 2017-08-04 04:24:11 | 2017-08-04 17:15:48 | 2017-08-04 19:43:50 | 2:28:02 | 0:55:02 | 1:33:00 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1481500 | 2017-08-04 04:24:11 | 2017-08-04 17:21:19 | 2017-08-04 18:25:19 | 1:04:00 | 0:13:18 | 0:50:42 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on ovh065 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1481503 | 2017-08-04 04:24:12 | 2017-08-04 17:22:43 | 2017-08-04 21:16:47 | 3:54:04 | 0:18:47 | 3:35:17 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on ovh094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 1481505 | 2017-08-04 04:24:13 | 2017-08-04 17:42:17 | 2017-08-04 21:40:21 | 3:58:04 | 2:26:04 | 1:32:00 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason:
"2017-08-04 19:28:31.406326 mon.b mon.0 158.69.73.88:6789/0 81 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1481508 | 2017-08-04 04:24:13 | 2017-08-04 17:42:48 | 2017-08-04 20:04:51 | 2:22:03 | 0:22:59 | 1:59:04 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 1481511 | 2017-08-04 04:24:14 | 2017-08-04 17:49:59 | 2017-08-04 18:43:59 | 0:54:00 | 0:16:37 | 0:37:23 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason:
Command failed on ovh083 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'