Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
ovh017.front.sepia.ceph.com | ovh | False | False | | | ubuntu | 16.04 | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3266224 | | 2018-11-17 16:45:44 | 2018-11-17 16:46:20 | 2018-11-17 17:10:20 | 0:24:00 | 0:07:22 | 0:16:38 | ovh | master | rhel | 7.5 | rgw/hadoop-s3a/{hadoop/v27.yaml s3a-hadoop.yaml supported-random-distro$/{rhel_latest.yaml}} | 4 |
Failure Reason:
Command failed on ovh084 with status 2: 'cd ~/ceph-ansible ; virtualenv --system-site-packages venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install setuptools>=11.3 notario>=0.0.13 netaddr ansible==2.5 ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -i inven.yml site.yml'
pass | 3259809 | | 2018-11-16 04:23:25 | 2018-11-16 13:28:58 | 2018-11-16 16:03:00 | 2:34:02 | 1:48:51 | 0:45:11 | ovh | master | centos | 7.4 | upgrade:jewel-x/ceph-deploy/{distros/centos_latest.yaml jewel-luminous.yaml slow_requests.yaml} | 4 |
pass | 3250988 | | 2018-11-13 05:01:06 | 2018-11-13 09:27:00 | 2018-11-13 10:15:00 | 0:48:00 | 0:29:27 | 0:18:33 | ovh | master | ubuntu | 16.04 | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
fail | 3245908 | | 2018-11-11 00:00:31 | 2018-11-11 00:48:46 | 2018-11-11 01:40:55 | 0:52:09 | 0:05:31 | 0:46:38 | ovh | master | rhel | 7.4 | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
Failure Reason:
Command failed on ovh017 with status 1: 'sudo yum -y install ceph-radosgw'
dead | 3245620 | | 2018-11-10 05:23:19 | 2018-11-17 20:00:37 | 2018-11-18 08:22:18 | 12:21:41 | | | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 |
pass | 3245596 | | 2018-11-10 05:23:01 | 2018-11-17 17:10:18 | 2018-11-17 18:42:19 | 1:32:01 | 1:09:10 | 0:22:51 | ovh | master | | | kcephfs/cephfs/{clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_misc.yaml} | 3 |
pass | 3245573 | | 2018-11-10 05:22:44 | 2018-11-17 14:52:18 | 2018-11-17 16:24:19 | 1:32:01 | 0:22:05 | 1:09:56 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 |
dead | 3245520 | | 2018-11-10 05:22:05 | 2018-11-16 21:36:03 | 2018-11-17 09:43:32 | 12:07:29 | | | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 |
pass | 3245507 | | 2018-11-10 05:21:55 | 2018-11-16 20:09:55 | 2018-11-16 21:45:56 | 1:36:01 | 0:34:33 | 1:01:28 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 |
pass | 3245495 | | 2018-11-10 05:21:46 | 2018-11-16 18:41:36 | 2018-11-16 19:37:37 | 0:56:01 | 0:22:24 | 0:33:37 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 |
pass | 3245460 | | 2018-11-10 05:21:20 | 2018-11-16 15:41:21 | 2018-11-16 17:05:22 | 1:24:01 | 0:38:09 | 0:45:52 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 |
fail | 3245441 | | 2018-11-10 05:21:05 | 2018-11-16 01:23:03 | 2018-11-16 03:33:05 | 2:10:02 | 1:45:56 | 0:24:06 | ovh | master | | | kcephfs/mixed-clients/{clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 |
Failure Reason:
"2018-11-16 02:20:09.910842 mon.a mon.0 158.69.70.14:6789/0 287 : cluster [WRN] Health check failed: Degraded data redundancy: 5687/22844 objects degraded (24.895%), 9 pgs degraded (PG_DEGRADED)" in cluster log
pass | 3245431 | | 2018-11-10 05:20:57 | 2018-11-15 23:52:59 | 2018-11-16 01:23:00 | 1:30:01 | 0:52:21 | 0:37:40 | ovh | master | | | kcephfs/cephfs/{clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 |
pass | 3245417 | | 2018-11-10 05:20:47 | 2018-11-15 22:04:48 | 2018-11-15 23:22:49 | 1:18:01 | 0:25:35 | 0:52:26 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 |
pass | 3245404 | | 2018-11-10 05:20:37 | 2018-11-15 20:40:26 | 2018-11-15 22:06:27 | 1:26:01 | 0:22:37 | 1:03:24 | ovh | master | | | kcephfs/recovery/{clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 |
pass | 3245397 | | 2018-11-10 05:20:32 | 2018-11-15 19:34:12 | 2018-11-15 20:26:12 | 0:52:00 | 0:29:42 | 0:22:18 | ovh | master | | | kcephfs/cephfs/{clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 |
fail | 3226503 | | 2018-11-05 05:23:14 | 2018-11-11 21:56:50 | 2018-11-11 23:32:51 | 1:36:01 | 0:58:24 | 0:37:37 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason:
"2018-11-11 22:53:06.919489 mon.b mon.0 158.69.69.77:6789/0 1992 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3226482 | | 2018-11-05 05:22:59 | 2018-11-11 20:22:25 | 2018-11-11 21:16:26 | 0:54:01 | 0:31:07 | 0:22:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 |
fail | 3226463 | | 2018-11-05 05:22:45 | 2018-11-11 18:38:08 | 2018-11-11 20:22:09 | 1:44:01 | 1:25:13 | 0:18:48 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 |
Failure Reason:
"2018-11-11 19:27:34.116455 mon.b mon.0 158.69.68.192:6789/0 330 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3226452 | | 2018-11-05 05:22:37 | 2018-11-11 18:00:03 | 2018-11-11 18:38:03 | 0:38:00 | 0:14:47 | 0:23:13 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 |
Failure Reason:
{'ovh017.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:28:41.618770', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:28:41.612310', 'delta': '0:00:00.006460', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh083.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:31:48.439320', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:31:48.433525', 'delta': '0:00:00.005795', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh012.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:28:41.255890', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:28:41.250057', 'delta': '0:00:00.005833', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh032.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:31:20.024473', 
'_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:31:20.018424', 'delta': '0:00:00.006049', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh042.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:29:11.039990', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:29:11.033924', 'delta': '0:00:00.006066', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh068.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-11-11 18:29:11.163858', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-11-11 18:29:11.157526', 'delta': '0:00:00.006332', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}} |