Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
mira038.front.sepia.ceph.com | mira | False | False | | | ubuntu | 18.04 | x86_64 | To be e-wasted 29JAN2020
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
dead | 4705463 | | 2020-01-25 03:59:49 | 2020-01-26 04:57:28 | 2020-01-26 17:00:01 | 12:02:33 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
pass | 4705442 | | 2020-01-25 03:59:34 | 2020-01-25 03:59:57 | 2020-01-25 04:35:57 | 0:36:00 | 0:23:54 | 0:12:06 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
fail | 4700054 | | 2020-01-24 05:58:38 | 2020-01-24 17:16:31 | 2020-01-24 17:38:30 | 0:21:59 | 0:07:43 | 0:14:16 | mira | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_7.4.yaml objectstore/bluestore-bitmap.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2
Failure Reason:
Command failed on mira016 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
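The failing command above is a fallback chain across init systems (Upstart `stop`, SysV `service`, systemd `systemctl`): with `||`, each alternative runs only if the previous one failed, and the reported status 5 is the exit code of the last alternative in the chain. A minimal sketch of the same pattern, using harmless stand-in commands rather than the real service-stop calls:

```python
import subprocess


def run_fallback_chain(commands):
    """Run commands left to right, stopping at the first success,
    mirroring `cmd1 || cmd2 || cmd3` in the shell. Returns 0 on the
    first success, else the exit status of the last command."""
    rc = 1
    for cmd in commands:
        rc = subprocess.call(cmd, shell=True)
        if rc == 0:
            return rc
    return rc


# Stand-ins for `stop ceph-all`, `service ceph stop`,
# `systemctl stop ceph.target`; here the last alternative succeeds.
rc = run_fallback_chain(["exit 5", "exit 5", "exit 0"])

# If every alternative fails, the chain reports the last exit status,
# which is how the job above ended up with status 5.
rc_all_fail = run_fallback_chain(["exit 1", "exit 5"])
```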
fail | 4700043 | | 2020-01-24 05:58:31 | 2020-01-24 16:36:01 | 2020-01-24 17:18:01 | 0:42:00 | 0:07:42 | 0:34:18 | mira | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_7.4.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2
Failure Reason:
Command failed on mira016 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 4700041 | | 2020-01-24 05:58:30 | 2020-01-24 16:34:03 | 2020-01-24 16:56:02 | 0:21:59 | 0:07:28 | 0:14:31 | mira | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_7.4.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2
Failure Reason:
Command failed on mira016 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 4700037 | | 2020-01-24 05:58:27 | 2020-01-24 15:56:44 | 2020-01-24 16:34:45 | 0:38:01 | 0:07:52 | 0:30:09 | mira | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_7.4.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2
Failure Reason:
Command failed on mira016 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 4700031 | | 2020-01-24 05:58:23 | 2020-01-24 15:07:49 | 2020-01-24 16:09:50 | 1:02:01 | 0:04:03 | 0:57:58 | mira | master | ubuntu | 16.04 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml tasks/rbd_import_export.yaml} | 4
Failure Reason:
Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap'
fail | 4700022 | | 2020-01-24 05:58:17 | 2020-01-24 14:09:09 | 2020-01-24 15:55:10 | 1:46:01 | 0:50:58 | 0:55:03 | mira | master | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml tasks/rbd_import_export.yaml} | 4
Failure Reason:
The Ansible task `rm -rf /var/cache/yum/*; yum clean all; yum makecache` exited with rc 1 on two of the job's nodes (the second was mira063.front.sepia.ceph.com; deltas 0:08:26 and 0:38:19). On both nodes every EPEL mirror failed while downloading repodata/d0878257236e2d6fa937b34d0f8b5a3551ee8b90f497fd308734bd43c44fe6b4-filelists.sqlite.bz2 from http://dl.fedoraproject.org/pub/epel/7/x86_64/, with repeated `[Errno 14] curl#18 - "transfer closed with N bytes remaining to read"` errors and an `[Errno 12]` timeout ("Operation too slow. Less than 1000 bytes/sec transferred the last 300 seconds"), ending in `[Errno 256] No more mirrors to try`. yum printed its standard advice block (contact upstream, fix the baseurl, `yum --disablerepo=epel`, `yum-config-manager --disable epel` / `subscription-manager repos --disable=epel`, or `yum-config-manager --save --setopt=epel.skip_if_unavailable=true`).
While logging each failure object, teuthology's callback plugin (git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py, line 44) itself crashed: `yaml.safe_dump(failure)` raised `RepresenterError: ('cannot represent an object', u'rm -rf /var/cache/yum/*; yum clean all; yum makecache')` from the Python 2.7 teuthology virtualenv's PyYAML.
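The `RepresenterError` in that traceback is reproducible with plain PyYAML: `yaml.safe_dump` only knows how to represent built-in types, so a `str` subclass (Ansible wraps task-result strings in such subclasses; the class name used here is a stand-in, not the real Ansible type) falls through to `represent_undefined` and raises. A minimal sketch:

```python
import yaml


class TaggedStr(str):
    """Stand-in for the str subclass Ansible stores in task results.
    The real Ansible class name is an assumption, not shown in the log."""


failure = {"cmd": TaggedStr("rm -rf /var/cache/yum/*; yum clean all; yum makecache")}

# safe_dump dispatches on the exact type, so the subclass is "undefined"
# to SafeDumper and raises RepresenterError, as in failure_log.py above.
try:
    yaml.safe_dump(failure)
    crashed = False
except yaml.representer.RepresenterError:
    crashed = True

# Coercing values to built-in types before dumping avoids the crash:
dumped = yaml.safe_dump({k: str(v) for k, v in failure.items()})
```

The coercion at the end is one way such a callback could dump arbitrary failure objects without dying mid-log.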
pass | 4700015 | | 2020-01-24 05:56:02 | 2020-01-24 13:19:05 | 2020-01-24 14:45:06 | 1:26:01 | 0:25:11 | 1:00:50 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} | 4
fail | 4700014 | | 2020-01-24 05:56:02 | 2020-01-24 13:16:20 | 2020-01-24 14:08:21 | 0:52:01 | 0:34:35 | 0:17:26 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} | 4
Failure Reason:
SELinux denials found on ubuntu@mira063.front.sepia.ceph.com:
- type=AVC msg=audit(1579873951.921:5510): avc: denied { search } for pid=2204 comm="ceph-mgr" name="httpd" dev="sda1" ino=82020 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
fail | 4700010 | | 2020-01-24 05:55:59 | 2020-01-24 11:42:26 | 2020-01-24 12:44:35 | 1:02:09 | 0:33:09 | 0:29:00 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
Failure Reason:
SELinux denials found on ubuntu@mira101.front.sepia.ceph.com:
- type=AVC msg=audit(1579868941.892:6118): avc: denied { open } for pid=4398 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=111388 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868963.129:6177): avc: denied { open } for pid=5060 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=120762 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868771.545:5791): avc: denied { getattr } for pid=2975 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868963.130:6178): avc: denied { getattr } for pid=5060 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=120762 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868941.892:6119): avc: denied { getattr } for pid=4398 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=111388 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868941.892:6118): avc: denied { read } for pid=4398 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=111388 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579868963.129:6177): avc: denied { read } for pid=5060 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=120762 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
pass | 4700005 | | 2020-01-24 05:55:56 | 2020-01-24 10:00:59 | 2020-01-24 13:19:03 | 3:18:04 | 0:23:26 | 2:54:38 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
pass | 4700002 | | 2020-01-24 05:55:54 | 2020-01-24 09:28:23 | 2020-01-24 10:22:23 | 0:54:00 | 0:24:40 | 0:29:20 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
fail | 4700001 | | 2020-01-24 05:55:53 | 2020-01-24 09:26:32 | 2020-01-24 11:12:40 | 1:46:08 | 0:32:03 | 1:14:05 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} | 4
Failure Reason:
SELinux denials found on ubuntu@mira038.front.sepia.ceph.com:
- type=AVC msg=audit(1579863319.740:5751): avc: denied { getattr } for pid=5479 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=115140 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863306.846:5737): avc: denied { read } for pid=4919 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=115713 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863319.740:5750): avc: denied { open } for pid=5479 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=115140 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863306.847:5738): avc: denied { getattr } for pid=4919 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=115713 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863306.846:5737): avc: denied { open } for pid=4919 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=115713 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863319.740:5750): avc: denied { read } for pid=5479 comm="fn_anonymous" name="b8:48" dev="tmpfs" ino=115140 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579863272.209:5643): avc: denied { getattr } for pid=3716 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1
pass | 4700000 | | 2020-01-24 05:55:52 | 2020-01-24 09:26:32 | 2020-01-24 11:46:40 | 2:20:08 | 0:23:48 | 1:56:20 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
fail | 4699997 | | 2020-01-24 05:55:50 | 2020-01-24 08:03:57 | 2020-01-24 09:48:00 | 1:44:03 | 0:31:46 | 1:12:17 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
Failure Reason:
SELinux denials found on ubuntu@mira038.front.sepia.ceph.com:
- type=AVC msg=audit(1579858253.559:5761): avc: denied { getattr } for pid=5528 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=113364 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858211.705:5657): avc: denied { getattr } for pid=3752 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858240.680:5747): avc: denied { read } for pid=4976 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=113148 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858253.558:5760): avc: denied { read } for pid=5528 comm="fn_anonymous" name="b8:48" dev="tmpfs" ino=113364 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858240.680:5748): avc: denied { getattr } for pid=4976 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=113148 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858200.440:5645): avc: denied { getattr } for pid=3752 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858240.680:5747): avc: denied { open } for pid=4976 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=113148 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579858253.558:5760): avc: denied { open } for pid=5528 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=113364 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
pass | 4699992 | | 2020-01-24 05:55:47 | 2020-01-24 06:49:33 | 2020-01-24 08:03:34 | 1:14:01 | 0:24:16 | 0:49:45 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
fail | 4699989 | | 2020-01-24 05:55:45 | 2020-01-24 06:16:29 | 2020-01-24 08:52:32 | 2:36:03 | 0:32:16 | 2:03:47 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
Failure Reason:
SELinux denials found on ubuntu@mira038.front.sepia.ceph.com:
- type=AVC msg=audit(1579854932.700:5789): avc: denied { read } for pid=6193 comm="fn_anonymous" name="b8:48" dev="tmpfs" ino=107416 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854919.677:5777): avc: denied { getattr } for pid=5618 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=114054 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854932.700:5789): avc: denied { open } for pid=6193 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=107416 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854932.700:5790): avc: denied { getattr } for pid=6193 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=107416 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854919.677:5776): avc: denied { open } for pid=5618 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=114054 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854919.677:5776): avc: denied { read } for pid=5618 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=114054 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1
- type=AVC msg=audit(1579854884.572:5682): avc: denied { getattr } for pid=4397 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1
pass | 4699988 | | 2020-01-24 05:55:44 | 2020-01-24 06:09:52 | 2020-01-24 07:27:53 | 1:18:01 | 0:24:15 | 0:53:46 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} | 4
fail | 4692919 | | 2020-01-22 05:11:42 | 2020-01-24 05:38:20 | 2020-01-24 06:16:20 | 0:38:00 | 0:24:05 | 0:13:55 | mira | master | centos | 7.4 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2
Failure Reason:
Command failed (workunit test ceph-disk/ceph-disk.sh) on mira038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e1c5fd9306ce839d8d3ac8160c57aba42e0231a8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'