User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-07-15 04:23:02 | 2017-07-15 04:23:57 | 2017-07-15 16:50:23 | 12:26:26 | upgrade:jewel-x | master | ovh | 0cc6519 | 48 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1402115 | 2017-07-15 04:23:45 | 2017-07-15 04:23:50 | 2017-07-15 16:26:25 | 12:02:35 | | | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 |
fail | 1402119 | 2017-07-15 04:23:45 | 2017-07-15 04:23:55 | 2017-07-15 05:49:58 | 1:26:03 | 1:07:03 | 0:19:00 | ovh | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v11.0.0.sh) on ovh085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test-upgrade-v11.0.0.sh'
fail | 1402122 | 2017-07-15 04:23:46 | 2017-07-15 04:23:53 | 2017-07-15 07:21:50 | 2:57:57 | 2:41:59 | 0:15:58 | ovh | master | centos | 7.3 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 04:48:18.913588 mon.a mon.0 158.69.70.98:6789/0 45 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1402125 | 2017-07-15 04:23:47 | 2017-07-15 04:23:49 | 2017-07-15 05:47:49 | 1:24:00 | 1:06:57 | 0:17:03 | ovh | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 1402128 | 2017-07-15 04:23:47 | 2017-07-15 04:23:50 | 2017-07-15 04:57:49 | 0:33:59 | 0:21:33 | 0:12:26 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh054 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'
fail | 1402130 | 2017-07-15 04:23:48 | 2017-07-15 04:23:51 | 2017-07-15 06:15:51 | 1:52:00 | 1:39:13 | 0:12:47 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 04:49:02.586055 mon.b mon.0 158.69.71.103:6789/0 83 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402133 | 2017-07-15 04:23:49 | 2017-07-15 04:23:51 | 2017-07-15 06:11:52 | 1:48:01 | 1:29:50 | 0:18:11 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:08:38.775557 mon.a mon.0 158.69.70.59:6789/0 418 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402137 | 2017-07-15 04:23:50 | 2017-07-15 04:23:55 | 2017-07-15 06:11:52 | 1:47:57 | 1:33:13 | 0:14:44 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 04:48:49.718802 mon.a mon.0 158.69.70.101:6789/0 75 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402140 | 2017-07-15 04:23:50 | 2017-07-15 04:23:52 | 2017-07-15 05:55:53 | 1:32:01 | 1:21:04 | 0:10:57 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 04:50:16.189435 mon.a mon.0 158.69.70.84:6789/0 534 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402142 | 2017-07-15 04:23:51 | 2017-07-15 04:23:54 | 2017-07-15 05:59:54 | 1:36:00 | 1:15:58 | 0:20:02 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 04:53:31.512561 mon.a mon.0 158.69.70.79:6789/0 87 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402145 | 2017-07-15 04:23:52 | 2017-07-15 04:23:54 | 2017-07-15 04:47:52 | 0:23:58 | 0:11:36 | 0:12:22 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh009 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402148 | 2017-07-15 04:23:52 | 2017-07-15 04:23:57 | 2017-07-15 04:47:54 | 0:23:57 | 0:13:04 | 0:10:53 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh012 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402151 | 2017-07-15 04:23:53 | 2017-07-15 04:23:56 | 2017-07-15 06:31:56 | 2:08:00 | 1:34:49 | 0:33:11 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:11:05.013420 mon.a mon.0 158.69.71.175:6789/0 667 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402154 | 2017-07-15 04:23:54 | 2017-07-15 04:23:56 | 2017-07-15 04:47:55 | 0:23:59 | 0:11:27 | 0:12:32 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh018 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402158 | 2017-07-15 04:23:54 | 2017-07-15 04:23:57 | 2017-07-15 05:31:56 | 1:07:59 | 0:45:20 | 0:22:39 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1402160 | 2017-07-15 04:23:55 | 2017-07-15 04:23:57 | 2017-07-15 04:51:56 | 0:27:59 | 0:08:37 | 0:19:22 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh091 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402163 | 2017-07-15 04:23:56 | 2017-07-15 04:24:08 | 2017-07-15 07:34:14 | 3:10:06 | 1:18:13 | 1:51:53 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 06:29:27.799674 mon.a mon.0 158.69.73.172:6789/0 510 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402166 | 2017-07-15 04:23:56 | 2017-07-15 04:24:01 | 2017-07-15 05:55:59 | 1:31:58 | 0:12:25 | 1:19:33 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh019 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402169 | 2017-07-15 04:23:57 | 2017-07-15 04:23:59 | 2017-07-15 07:28:03 | 3:04:04 | 2:33:00 | 0:31:04 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 05:02:52.999357 mon.a mon.0 158.69.71.225:6789/0 45 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 1402172 | 2017-07-15 04:23:58 | 2017-07-15 04:24:00 | 2017-07-15 06:00:01 | 1:36:01 | 1:04:08 | 0:31:53 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 1402175 | 2017-07-15 04:23:59 | 2017-07-15 04:24:15 | 2017-07-15 06:36:15 | 2:12:00 | 1:28:23 | 0:43:37 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:21:57.306526 mon.b mon.0 158.69.71.30:6789/0 305 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402178 | 2017-07-15 04:23:59 | 2017-07-15 04:24:21 | 2017-07-15 05:58:20 | 1:33:59 | 0:09:31 | 1:24:28 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh034 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402181 | 2017-07-15 04:24:00 | 2017-07-15 04:24:24 | 2017-07-15 06:54:26 | 2:30:02 | 1:21:43 | 1:08:19 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:46:31.738045 mon.a mon.0 158.69.72.129:6789/0 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 1402184 | 2017-07-15 04:24:01 | 2017-07-15 04:24:26 | 2017-07-15 06:06:32 | 1:42:06 | 0:19:19 | 1:22:47 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
fail | 1402187 | 2017-07-15 04:24:01 | 2017-07-15 04:24:11 | 2017-07-15 07:12:19 | 2:48:08 | 1:14:25 | 1:33:43 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 06:12:36.387665 mon.a mon.0 158.69.72.32:6789/0 538 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402190 | 2017-07-15 04:24:02 | 2017-07-15 04:24:22 | 2017-07-15 07:48:23 | 3:24:01 | 1:20:24 | 2:03:37 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 06:40:21.103433 mon.b mon.0 158.69.73.26:6789/0 58 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402193 | 2017-07-15 04:24:03 | 2017-07-15 04:24:19 | 2017-07-15 06:14:21 | 1:50:02 | 1:19:20 | 0:30:42 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:09:40.817946 mon.a mon.0 158.69.71.170:6789/0 267 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402196 | 2017-07-15 04:24:03 | 2017-07-15 04:24:24 | 2017-07-15 07:18:26 | 2:54:02 | 1:09:28 | 1:44:34 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v11.0.0.sh) on ovh037 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test-upgrade-v11.0.0.sh'
fail | 1402199 | 2017-07-15 04:24:04 | 2017-07-15 04:24:25 | 2017-07-15 04:58:25 | 0:34:00 | 0:11:06 | 0:22:54 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh099 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402202 | 2017-07-15 04:24:05 | 2017-07-15 04:24:19 | 2017-07-15 05:34:19 | 1:10:00 | 0:24:22 | 0:45:38 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
fail | 1402205 | 2017-07-15 04:24:06 | 2017-07-15 04:35:33 | 2017-07-15 06:13:35 | 1:38:02 | 1:09:06 | 0:28:56 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:14:41.277052 mon.b mon.0 158.69.71.221:6789/0 100 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402208 | 2017-07-15 04:24:06 | 2017-07-15 04:37:36 | 2017-07-15 05:11:32 | 0:33:56 | 0:10:31 | 0:23:25 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on ovh091 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402212 | 2017-07-15 04:24:07 | 2017-07-15 04:39:31 | 2017-07-15 06:17:29 | 1:37:58 | 0:10:51 | 1:27:07 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh019 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402215 | 2017-07-15 04:24:08 | 2017-07-15 04:43:40 | 2017-07-15 05:37:38 | 0:53:58 | 0:08:48 | 0:45:10 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh091 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
dead | 1402218 | 2017-07-15 04:24:09 | 2017-07-15 04:47:55 | 2017-07-15 16:50:23 | 12:02:28 | | | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
fail | 1402221 | 2017-07-15 04:24:09 | 2017-07-15 04:47:57 | 2017-07-15 07:41:59 | 2:54:02 | 2:41:05 | 0:12:57 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:10:40.773699 mon.a mon.0 158.69.71.252:6789/0 58 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 1402224 | 2017-07-15 04:24:10 | 2017-07-15 04:47:57 | 2017-07-15 06:39:58 | 1:52:01 | 0:10:38 | 1:41:23 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh090 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.3 1.0 host=localhost root=default'
fail | 1402228 | 2017-07-15 04:24:11 | 2017-07-15 04:52:01 | 2017-07-15 06:32:00 | 1:39:59 | 0:25:19 | 1:14:40 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh081 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402230 | 2017-07-15 04:24:11 | 2017-07-15 04:57:51 | 2017-07-15 06:53:51 | 1:56:00 | 1:24:26 | 0:31:34 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 05:39:04.803278 mon.a mon.0 158.69.71.9:6789/0 76 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402232 | 2017-07-15 04:24:12 | 2017-07-15 04:58:35 | 2017-07-15 06:46:39 | 1:48:04 | 1:14:02 | 0:34:02 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 05:48:28.842890 mon.b mon.0 158.69.72.112:6789/0 281 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402235 | 2017-07-15 04:24:12 | 2017-07-15 05:11:34 | 2017-07-15 09:05:38 | 3:54:04 | 1:37:08 | 2:16:56 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 07:43:53.553917 mon.a mon.0 158.69.75.154:6789/0 48 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402238 | 2017-07-15 04:24:13 | 2017-07-15 05:13:27 | 2017-07-15 06:37:28 | 1:24:01 | 0:09:00 | 1:15:01 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh062 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402241 | 2017-07-15 04:24:14 | 2017-07-15 05:21:34 | 2017-07-15 06:57:34 | 1:36:00 | 1:20:08 | 0:15:52 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 05:48:13.045958 mon.a mon.0 158.69.72.136:6789/0 80 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402244 | 2017-07-15 04:24:14 | 2017-07-15 05:21:40 | 2017-07-15 07:03:40 | 1:42:00 | 1:19:07 | 0:22:53 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 06:00:35.216130 mon.b mon.0 158.69.72.188:6789/0 193 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402247 | 2017-07-15 04:24:15 | 2017-07-15 05:25:40 | 2017-07-15 07:45:42 | 2:20:02 | 1:27:33 | 0:52:29 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: "2017-07-15 06:29:55.587140 mon.b mon.0 158.69.72.83:6789/0 118 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402250 | 2017-07-15 04:24:16 | 2017-07-15 05:29:41 | 2017-07-15 07:55:43 | 2:26:02 | 1:33:55 | 0:52:07 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: "2017-07-15 06:38:01.789122 mon.b mon.0 158.69.73.2:6789/0 430 : cluster [ERR] Health check failed: 1 mds daemon down (MDS_FAILED)" in cluster log
fail | 1402253 | 2017-07-15 04:24:16 | 2017-07-15 05:32:00 | 2017-07-15 06:30:01 | 0:58:01 | 0:14:03 | 0:43:58 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh066 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'
fail | 1402256 | 2017-07-15 04:24:17 | 2017-07-15 05:34:31 | 2017-07-15 07:40:34 | 2:06:03 | 0:43:54 | 1:22:09 | ovh | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1402259 | 2017-07-15 04:24:18 | 2017-07-15 05:37:42 | 2017-07-15 06:19:41 | 0:41:59 | 0:21:37 | 0:20:22 | ovh | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
fail | 1402262 | 2017-07-15 04:24:19 | 2017-07-15 05:48:22 | 2017-07-15 06:22:23 | 0:34:01 | 0:13:19 | 0:20:42 | ovh | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 | |
Failure Reason: Command failed on ovh034 with status 2: 'sudo ceph --cluster ceph osd crush create-or-move osd.2 1.0 host=localhost root=default'