User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
vasu | 2017-09-28 21:06:22 | 2017-09-28 21:06:37 | 2017-09-28 21:56:37 | 0:50:00 | smoke | luminous | ovh | 9915a2f | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1683286 | 2017-09-28 21:06:25 | 2017-09-28 21:06:37 | 2017-09-28 21:22:36 | 0:15:59 | 0:10:36 | 0:05:23 | ovh | wip-daemon-helper-systemd | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 | 'check health' reached maximum tries (6) after waiting for 60 seconds |
fail | 1683287 | 2017-09-28 21:06:26 | 2017-09-28 21:06:37 | 2017-09-28 21:40:36 | 0:33:59 | 0:21:07 | 0:12:52 | ovh | wip-daemon-helper-systemd | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 | Command failed on ovh081 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |
fail | 1683288 | 2017-09-28 21:06:26 | 2017-09-28 21:06:37 | 2017-09-28 21:56:37 | 0:50:00 | 0:30:01 | 0:19:59 | ovh | wip-daemon-helper-systemd | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 | Command failed on ovh050 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh050' |
fail | 1683289 | 2017-09-28 21:06:27 | 2017-09-28 21:06:36 | 2017-09-28 21:42:36 | 0:36:00 | 0:21:34 | 0:14:26 | ovh | wip-daemon-helper-systemd | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 | Command failed on ovh062 with status 1: 'sudo ceph --cluster ceph osd crush tunables default' |