| Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
|---|---|---|---|---|---|---|---|---|---|
| mira101.front.sepia.ceph.com | mira | True | True | 2022-09-18 17:00:20.221923 | matan@teuthology | | | x86_64 | None |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 6860587 | | 2022-06-02 18:49:56 | 2022-06-02 18:49:57 | 2022-06-02 19:05:38 | 0:15:41 | | | mira | i55530 | | | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} | 2 |
| Failure Reason: Cannot connect to remote host mira101 |
| fail | 6860549 | | 2022-06-02 17:50:24 | 2022-06-02 17:50:25 | 2022-06-02 17:58:19 | 0:07:54 | | | mira | i55530 | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/valgrind} | 2 |
| Failure Reason: everything on the same host must use the same kernel |
| dead | 6860546 | | 2022-06-02 17:49:50 | 2022-06-02 17:49:51 | 2022-06-02 17:49:53 | 0:00:02 | | | mira | i55530 | centos | 8.stream | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} | 2 |
| Failure Reason: Error reimaging machines: Could not find an image for centos 8.stream |
| dead | 6860543 | | 2022-06-02 17:47:15 | 2022-06-02 17:48:37 | 2022-06-02 17:48:39 | 0:00:02 | | | mira | i55530 | centos | 8.stream | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/lockdep} | 2 |
| Failure Reason: Error reimaging machines: Could not find an image for centos 8.stream |
| fail | 6860540 | | 2022-06-02 17:21:16 | 2022-06-02 17:21:16 | 2022-06-02 17:28:57 | 0:07:41 | | | mira | i55530 | | | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} | 2 |
| Failure Reason: 'override' |
| fail | 6860482 | | 2022-06-02 16:56:07 | 2022-06-02 16:58:06 | 2022-06-02 17:05:48 | 0:07:42 | | | mira | i55530 | | | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} | 2 |
| Failure Reason: 'override' |
| fail | 6858565 | | 2022-06-01 19:18:26 | 2022-06-01 19:18:26 | 2022-06-01 19:32:14 | 0:13:48 | | | mira | master | | | fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} | 2 |
| Failure Reason: Cannot connect to remote host mira030 |
| fail | 6858859 | | 2022-06-01 18:30:21 | 2022-06-01 20:48:23 | 2022-06-01 21:04:00 | 0:15:37 | | | mira | i55530 | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/lockdep} | 2 |
| Failure Reason: Cannot connect to remote host mira030 |
| dead | 6179504 | | 2021-06-18 15:30:55 | 2021-06-21 15:47:57 | 2021-06-21 15:52:06 | 0:04:09 | | | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/centos_latest python_versions/python_2 tasks/rbd_import_export} | 4 |
| Failure Reason: Error reimaging machines: Failed to power on mira053 |
| fail | 6179494 | | 2021-06-18 15:30:46 | 2021-06-19 15:53:51 | 2021-06-19 17:14:59 | 1:21:08 | 0:36:24 | 0:44:44 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
| Failure Reason: ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
| dead | 6179493 | | 2021-06-18 15:30:45 | 2021-06-19 15:50:10 | 2021-06-19 16:12:14 | 0:22:04 | 0:04:33 | 0:17:31 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/ubuntu_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 |
| Failure Reason: An Ansible teardown task on mira030.front.sepia.ceph.com failed: four runs of `pvremove --force --force --yes [unknown]` returned rc 5 ("Device [unknown] not found"; the stale PVs were rejected by an LVM filter), although /dev/sdf and /dev/sdg of VG vg_hdd were wiped successfully. The failure_log callback then crashed while dumping the result with `yaml.safe_dump`, raising yaml.representer.RepresenterError: ('cannot represent an object', '[unknown]'). |
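The RepresenterError in the traceback above is reproducible outside teuthology: `yaml.safe_dump` only knows how to represent exact built-in types, so a `str` *subclass* (Ansible wraps task output in `AnsibleUnsafeText`, which is such a subclass) falls through to `represent_undefined` and raises. A minimal sketch, using a hypothetical `UnsafeText` stand-in for Ansible's class:

```python
import yaml

class UnsafeText(str):
    """Stand-in for Ansible's AnsibleUnsafeText: a plain str subclass."""
    pass

# SafeDumper looks up a representer by the value's exact type; a str
# subclass is not found, so represent_undefined raises RepresenterError.
try:
    yaml.safe_dump({"item": UnsafeText("[unknown]")})
except yaml.representer.RepresenterError as exc:
    print(exc)  # ('cannot represent an object', '[unknown]')
```

This is why the callback died on the `'item': '[unknown]'` fields of the Ansible result rather than on the pvremove failure itself.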
| fail | 6179489 | | 2021-06-18 15:30:41 | 2021-06-18 17:51:02 | 2021-06-19 15:53:42 | 22:02:40 | 0:37:12 | 21:25:28 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
| Failure Reason: ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
| fail | 6179485 | | 2021-06-18 15:30:37 | 2021-06-18 16:18:00 | 2021-06-18 17:27:06 | 1:09:06 | 0:36:45 | 0:32:21 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 |
| Failure Reason: ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
| dead | 6179482 | | 2021-06-18 15:30:34 | 2021-06-18 15:59:07 | 2021-06-18 16:14:16 | 0:15:09 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 |
| Failure Reason: Error reimaging machines: Failed to power on mira053 |
| dead | 6179481 | | 2021-06-18 15:30:33 | 2021-06-18 15:30:34 | 2021-06-18 16:03:06 | 0:32:32 | | | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_2 tasks/rbd_import_export} | 4 |
| Failure Reason: Error reimaging machines: Failed to power on mira053 |
| fail | 6179479 | | 2021-06-18 15:30:31 | 2021-06-18 15:30:32 | 2021-06-18 15:59:01 | 0:28:29 | 0:09:32 | 0:18:57 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 |
| Failure Reason: Command failed on mira026 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
| dead | 6124783 | | 2021-05-20 10:15:16 | 2021-05-20 10:15:16 | 2021-05-20 10:30:21 | 0:15:05 | | | mira | master | rhel | 8.3 | rados:cephadm/smoke-roleless/{0-distro/rhel_8.3_kubic_stable 1-start 2-services/mirror 3-final} | 2 |
| Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds |
| dead | 6124776 | | 2021-05-20 10:07:48 | 2021-05-20 10:08:25 | 2021-05-20 10:08:29 | 0:00:04 | | | mira | master | centos | 8.2 | rados:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.2_kubic_stable} 2-node-mgr orchestrator_cli} | 2 |
| Failure Reason: Error reimaging machines: Could not find an image for centos 8.2 |
| dead | 6092033 | | 2021-05-03 05:55:42 | 2021-05-03 23:56:27 | 2021-05-04 00:54:23 | 0:57:56 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 |
| Failure Reason: Error reimaging machines: Failed to power on mira053 |
| fail | 6092032 | | 2021-05-03 05:55:41 | 2021-05-03 23:51:35 | 2021-05-04 00:49:52 | 0:58:17 | 0:36:44 | 0:21:33 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
| Failure Reason: ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |