User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2020-05-21 23:17:45 | 2020-05-25 05:48:31 | 2020-05-25 18:18:58 | 12:30:27 | rados | master | smithi | 617298c | 244 | 62 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5079792 | 2020-05-21 23:18:21 | 2020-05-25 04:52:38 | 2020-05-25 05:16:38 | 0:24:00 | 0:12:41 | 0:11:19 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T05:13:19.387032+00:00 smithi119 bash[10356]: debug 2020-05-25T05:13:19.381+0000 7fa4e88a6700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5079793 | 2020-05-21 23:18:22 | 2020-05-25 04:56:33 | 2020-05-25 05:18:33 | 0:22:00 | 0:11:54 | 0:10:06 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_rand_read.yaml} | 1 | |
pass | 5079794 | 2020-05-21 23:18:23 | 2020-05-25 04:58:29 | 2020-05-25 05:32:29 | 0:34:00 | 0:27:53 | 0:06:07 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5079795 | 2020-05-21 23:18:24 | 2020-05-25 05:00:18 | 2020-05-25 05:16:18 | 0:16:00 | 0:09:24 | 0:06:36 | smithi | master | centos | 8.1 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079796 | 2020-05-21 23:18:25 | 2020-05-25 05:00:27 | 2020-05-25 05:28:26 | 0:27:59 | 0:21:15 | 0:06:44 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-balanced.yaml} | 2 | |
fail | 5079797 | 2020-05-21 23:18:26 | 2020-05-25 05:48:31 | 2020-05-25 06:16:31 | 0:28:00 | 0:12:30 | 0:15:30 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T06:07:01.892811+00:00 smithi105 bash[10470]: debug 2020-05-25T06:07:01.887+0000 7f996ecda700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi105 ... ' in syslog |
pass | 5079798 | 2020-05-21 23:18:27 | 2020-05-25 05:48:31 | 2020-05-25 06:08:31 | 0:20:00 | 0:13:26 | 0:06:34 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
pass | 5079799 | 2020-05-21 23:18:28 | 2020-05-25 05:48:31 | 2020-05-25 06:20:31 | 0:32:00 | 0:10:11 | 0:21:49 | smithi | master | centos | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/force-sync-many.yaml workloads/rados_5925.yaml} | 2 | |
dead | 5079800 | 2020-05-21 23:18:29 | 2020-05-25 05:50:30 | 2020-05-25 06:20:30 | 0:30:00 | 0:02:43 | 0:27:17 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
Failure Reason:
(Condensed; the original entry is a long, partially truncated dump in which the same yum error and Python traceback repeat twice, once for smithi185.front.sepia.ceph.com and once for smithi064.front.sepia.ceph.com.)

The ceph-cm-ansible yum task installing nagios-common, nrpe, nagios-plugins, and nagios-plugins-load (state: latest, enablerepo: ['epel']) failed: every configured EPEL 7 mirror (ftp.linux.ncsu.edu, mirror.oss.ou.edu, mirror.pnl.gov, mirrors.cat.pdx.edu, download-cc-rdu01.fedoraproject.org, download-ib01.fedoraproject.org) returned "[Errno 14] HTTP Error 404 - Not Found" for the repodata files (comps-Everything.x86_64.xml.gz, updateinfo.xml.bz2, primary.sqlite.bz2), ending with "failure: repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try". yum's own output lists the standard workarounds: run with the repo temporarily disabled (yum --disablerepo=epel ...), disable it permanently (yum-config-manager --disable epel), or mark it skippable (yum-config-manager --save --setopt=epel.skip_if_unavailable=true).

While logging that failure, teuthology's Ansible callback plugin (git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py, line 44) itself crashed inside yaml.safe_dump(failure), with the recursion descending through represent_dict / represent_mapping / represent_list until it hit a value with no registered representer:

yaml.representer.RepresenterError: ('cannot represent an object', 'epel')

That is, PyYAML's SafeDumper reached a value in the Ansible failure object that it cannot serialize (most likely a str subclass such as Ansible's AnsibleUnsafeText, given that the unrepresentable object prints as the plain string 'epel'), so the failure object was raised instead of logged.
node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)yaml.representer.RepresenterError: ('cannot represent an object', 'epel') |
||||||||||||||
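The dead jobs in this run actually fail twice: yum fails against the stale EPEL mirrors, and then teuthology's failure_log.py callback crashes while trying to log that failure, because yaml.safe_dump refuses the 'epel' value in the ansible result. Ansible wraps task-result strings in str subclasses (AnsibleUnsafeText), and PyYAML's SafeDumper only represents plain built-in types, so a subclass falls through to represent_undefined. A minimal reproduction, using a hypothetical TaggedStr class as a stand-in for ansible's wrapper (the json round-trip workaround assumes the result dict is json-serializable, which ansible results are):

```python
import json

import yaml
from yaml.representer import RepresenterError


class TaggedStr(str):
    """Hypothetical stand-in for Ansible's AnsibleUnsafeText str subclass."""


# Shaped like the failure object in the tracebacks above.
failure = {"invocation": {"module_args": {"enablerepo": [TaggedStr("epel")]}}}

try:
    yaml.safe_dump(failure)
except RepresenterError as exc:
    # SafeDumper has no representer registered for the str subclass,
    # so represent_undefined raises, just as in the teuthology logs.
    print(exc)  # -> ('cannot represent an object', 'epel')

# One workaround: coerce everything to plain built-in types before dumping,
# e.g. by round-tripping through json.
plain = json.loads(json.dumps(failure))
print(yaml.safe_dump(plain))
```

The same effect can be had with `yaml.dump` and a custom representer, but coercing to plain types keeps the log output safe_dump-compatible.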
pass | 5079801 | 2020-05-21 23:18:30 | 2020-05-25 05:50:31 | 2020-05-25 06:12:30 | 0:21:59 | 0:13:05 | 0:08:54 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{rhel_8.yaml} tasks/crash.yaml} | 2 | |
pass | 5079802 | 2020-05-21 23:18:31 | 2020-05-25 05:50:31 | 2020-05-25 06:30:31 | 0:40:00 | 0:33:02 | 0:06:58 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079803 | 2020-05-21 23:18:32 | 2020-05-25 05:50:40 | 2020-05-25 07:02:42 | 1:12:02 | 1:04:42 | 0:07:20 | smithi | master | rhel | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079804 | 2020-05-21 23:18:33 | 2020-05-25 05:50:52 | 2020-05-25 06:10:52 | 0:20:00 | 0:10:10 | 0:09:50 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079805 | 2020-05-21 23:18:33 | 2020-05-25 05:52:31 | 2020-05-25 06:12:31 | 0:20:00 | 0:10:31 | 0:09:29 | smithi | master | ubuntu | 18.04 | rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079806 | 2020-05-21 23:18:34 | 2020-05-25 05:52:31 | 2020-05-25 06:14:31 | 0:22:00 | 0:14:50 | 0:07:10 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079807 | 2020-05-21 23:18:35 | 2020-05-25 05:52:37 | 2020-05-25 06:14:37 | 0:22:00 | 0:15:06 | 0:06:54 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/crush.yaml} | 1 | |
pass | 5079808 | 2020-05-21 23:18:36 | 2020-05-25 05:52:41 | 2020-05-25 10:10:47 | 4:18:06 | 3:40:15 | 0:37:51 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} | 4 | |
pass | 5079809 | 2020-05-21 23:18:37 | 2020-05-25 05:54:35 | 2020-05-25 06:20:35 | 0:26:00 | 0:20:35 | 0:05:25 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/mon.yaml centos_latest.yaml} | 1 | |
dead | 5079810 | 2020-05-21 23:18:38 | 2020-05-25 05:54:35 | 2020-05-25 06:08:35 | 0:14:00 | 0:02:54 | 0:11:06 | smithi | master | ubuntu | 18.04 | rados/cephadm/orchestrator_cli/{2-node-mgr.yaml orchestrator_cli.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit smithi191.front.sepia.ceph.com,smithi168.front.sepia.ceph.com /home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/cephlab.yml' |
||||||||||||||
pass | 5079811 | 2020-05-21 23:18:39 | 2020-05-25 05:54:37 | 2020-05-25 06:10:36 | 0:15:59 | 0:08:49 | 0:07:10 | smithi | master | centos | 8.1 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
dead | 5079812 | 2020-05-21 23:18:40 | 2020-05-25 05:56:36 | 2020-05-25 06:10:35 | 0:13:59 | 0:02:39 | 0:11:20 | smithi | master | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
Same EPEL/yum failure as the jobs above: every EPEL 7 mirror (ftp.linux.ncsu.edu, mirror.oss.ou.edu, mirrors.cat.pdx.edu, download-ib01.fedoraproject.org, download-cc-rdu01.fedoraproject.org, mirror.pnl.gov) returned [Errno 14] HTTP Error 404 - Not Found for repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2, ending in failure: ... from epel: [Errno 256] No more mirrors to try, followed by yum's standard advice (--disablerepo=epel, yum-config-manager --disable epel, or epel.skip_if_unavailable=true). teuthology's callback_plugins/failure_log.py line 44 then crashed in yaml.safe_dump(failure) with yaml.representer.RepresenterError: ('cannot represent an object', 'epel'). Failure object was the ansible yum task result from smithi190.front.sepia.ceph.com: rc=1, state=latest, enablerepo=['epel'], name=['nagios-common', 'nrpe', 'nagios-plugins', 'nagios-plugins-load'], whose msg listed the same 404s for the EPEL 7 repodata files (2a6554f8...-comps-Everything.x86_64.xml.gz, 9de8acce...-updateinfo.xml.bz2, d69bfed2...-primary.sqlite.bz2) across all mirrors, ending again in [Errno 256] No more mirrors to try and a second, identical RepresenterError traceback from failure_log.py |
||||||||||||||
pass | 5079813 | 2020-05-21 23:18:41 | 2020-05-25 05:56:36 | 2020-05-25 06:32:36 | 0:36:00 | 0:27:56 | 0:08:04 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 5079814 | 2020-05-21 23:18:42 | 2020-05-25 05:56:52 | 2020-05-25 06:16:52 | 0:20:00 | 0:10:37 | 0:09:23 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_seq_read.yaml} | 1 | |
pass | 5079815 | 2020-05-21 23:18:42 | 2020-05-25 05:58:54 | 2020-05-25 06:16:53 | 0:17:59 | 0:12:09 | 0:05:50 | smithi | master | rhel | 8.1 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
dead | 5079816 | 2020-05-21 23:18:43 | 2020-05-25 06:00:37 | 2020-05-25 06:14:36 | 0:13:59 | 0:02:34 | 0:11:25 | smithi | master | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason:
ansible: yum could not install the nagios packages (nagios-common, nrpe, nagios-plugins, nagios-plugins-load) from EPEL on smithi109.front.sepia.ceph.com. Every configured mirror (ftp.linux.ncsu.edu, mirror.oss.ou.edu, mirrors.cat.pdx.edu, mirror.pnl.gov, download-ib01.fedoraproject.org, download-cc-rdu01.fedoraproject.org) returned "[Errno 14] HTTP Error 404 - Not Found" for the EPEL 7 x86_64 repodata files (2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz, 9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2, d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2), ending in "failure: repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try." While logging this failure, the ceph-cm-ansible failure_log callback (callback_plugins/failure_log.py, line 44) itself crashed: yaml.safe_dump() recursed through the failure dict and raised yaml.representer.RepresenterError: ('cannot represent an object', 'epel') at representer.py line 231 (represent_undefined). |
||||||||||||||
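The RepresenterError in the failure reason above is a secondary failure in the logging callback, not the root cause: PyYAML's SafeDumper only serializes types it explicitly knows, and Ansible result dicts typically carry string values wrapped in AnsibleUnsafeText, a str subclass, which falls through to represent_undefined. A minimal sketch of that behavior, where UnsafeText is a hypothetical stand-in for Ansible's wrapper:

```python
import yaml

class UnsafeText(str):
    # Hypothetical stand-in for Ansible's AnsibleUnsafeText,
    # a str subclass used to mark task results as unsafe to template.
    pass

# Shape loosely modeled on the failure dict in the traceback above.
failure = {'invocation': {'module_args': {'enablerepo': [UnsafeText('epel')]}}}

try:
    yaml.safe_dump(failure)
except yaml.representer.RepresenterError as err:
    # SafeDumper dispatches on the exact type; a str subclass matches no
    # registered representer, so represent_undefined raises.
    print(err)  # → ('cannot represent an object', 'epel')

# Coercing to plain str before dumping avoids the crash:
print(yaml.safe_dump({'enablerepo': [str(UnsafeText('epel'))]}))
```

One fix along these lines is for the callback to coerce values to plain built-in types (or use yaml.dump with a permissive Dumper) before calling yaml.safe_dump, so the real yum error is logged instead of a RepresenterError traceback.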
pass | 5079817 | 2020-05-21 23:18:44 | 2020-05-25 06:12:45 | 2020-05-25 07:06:46 | 0:54:01 | 0:11:55 | 0:42:06 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5079818 | 2020-05-21 23:18:45 | 2020-05-25 06:12:45 | 2020-05-25 06:28:45 | 0:16:00 | 0:03:22 | 0:12:38 | smithi | master | ubuntu | 18.04 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{ubuntu_18.04_podman.yaml} fixed-2.yaml} | 2 | |
Failure Reason:
Command failed on smithi002 with status 5: 'sudo systemctl stop ceph-df47e428-9e50-11ea-a06a-001a4aab830c@mon.a' |
||||||||||||||
pass | 5079819 | 2020-05-21 23:18:46 | 2020-05-25 06:12:45 | 2020-05-25 06:38:45 | 0:26:00 | 0:18:45 | 0:07:15 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_stress_watch.yaml} | 2 | |
pass | 5079820 | 2020-05-21 23:18:47 | 2020-05-25 06:12:45 | 2020-05-25 06:50:45 | 0:38:00 | 0:31:32 | 0:06:28 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
fail | 5079821 | 2020-05-21 23:18:48 | 2020-05-25 06:14:28 | 2020-05-25 06:32:28 | 0:18:00 | 0:11:59 | 0:06:01 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/balancer.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason:
"2020-05-25T06:28:33.543101+0000 mon.a (mon.0) 117 : cluster [WRN] Health check failed: Degraded data redundancy: 213/8052 objects degraded (2.645%), 8 pgs degraded (PG_DEGRADED)" in cluster log |
||||||||||||||
pass | 5079822 | 2020-05-21 23:18:49 | 2020-05-25 06:14:31 | 2020-05-25 06:50:31 | 0:36:00 | 0:27:47 | 0:08:13 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5079823 | 2020-05-21 23:18:50 | 2020-05-25 06:14:32 | 2020-05-25 06:34:32 | 0:20:00 | 0:13:38 | 0:06:22 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
||||||||||||||
pass | 5079824 | 2020-05-21 23:18:51 | 2020-05-25 06:14:38 | 2020-05-25 06:32:37 | 0:17:59 | 0:08:00 | 0:09:59 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 5079825 | 2020-05-21 23:18:52 | 2020-05-25 06:14:38 | 2020-05-25 06:42:38 | 0:28:00 | 0:11:02 | 0:16:58 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5079826 | 2020-05-21 23:18:53 | 2020-05-25 06:14:38 | 2020-05-25 07:12:39 | 0:58:01 | 0:47:04 | 0:10:57 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
pass | 5079827 | 2020-05-21 23:18:54 | 2020-05-25 06:16:12 | 2020-05-25 06:58:13 | 0:42:01 | 0:25:14 | 0:16:47 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5079828 | 2020-05-21 23:18:55 | 2020-05-25 06:16:16 | 2020-05-25 06:50:16 | 0:34:00 | 0:23:29 | 0:10:31 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
fail | 5079829 | 2020-05-21 23:18:55 | 2020-05-25 06:16:16 | 2020-05-25 07:12:17 | 0:56:01 | 0:33:02 | 0:22:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T06:49:47.415925+00:00 smithi002 bash[17977]: debug 2020-05-25T06:49:47.412+0000 7fb37527e700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5079830 | 2020-05-21 23:18:56 | 2020-05-25 06:16:23 | 2020-05-25 06:36:23 | 0:20:00 | 0:12:13 | 0:07:47 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_rand_read.yaml} | 1 | |
dead | 5079831 | 2020-05-21 23:18:57 | 2020-05-25 06:16:24 | 2020-05-25 18:18:58 | 12:02:34 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
pass | 5079832 | 2020-05-21 23:18:59 | 2020-05-25 06:16:32 | 2020-05-25 06:34:32 | 0:18:00 | 0:08:52 | 0:09:08 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079833 | 2020-05-21 23:19:00 | 2020-05-25 06:16:32 | 2020-05-25 06:58:33 | 0:42:01 | 0:29:42 | 0:12:19 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/many.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 5079834 | 2020-05-21 23:19:00 | 2020-05-25 06:16:44 | 2020-05-25 07:20:45 | 1:04:01 | 0:46:58 | 0:17:03 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench-high-concurrency.yaml} | 2 | |
pass | 5079835 | 2020-05-21 23:19:01 | 2020-05-25 06:16:53 | 2020-05-25 06:42:53 | 0:26:00 | 0:12:46 | 0:13:14 | smithi | master | centos | 8.0 | rados/cephadm/smoke/{distro/centos_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5079836 | 2020-05-21 23:19:02 | 2020-05-25 06:16:55 | 2020-05-25 06:46:55 | 0:30:00 | 0:12:16 | 0:17:44 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{centos_8.yaml} tasks/failover.yaml} | 2 | |
dead | 5079837 | 2020-05-21 23:19:03 | 2020-05-25 06:28:58 | 2020-05-25 06:46:56 | 0:17:58 | 0:02:44 | 0:15:14 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 | |
Failure Reason:
ansible: yum could not install the nagios packages (nagios-common, nrpe, nagios-plugins, nagios-plugins-load) from EPEL on smithi122.front.sepia.ceph.com. Every configured mirror (ftp.linux.ncsu.edu, mirror.oss.ou.edu, mirrors.cat.pdx.edu, mirror.pnl.gov, download-ib01.fedoraproject.org, download-cc-rdu01.fedoraproject.org) returned "[Errno 14] HTTP Error 404 - Not Found" for the EPEL 7 x86_64 repodata files (2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz, 9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2, d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2), ending in "failure: repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try." While logging this failure, the ceph-cm-ansible failure_log callback (callback_plugins/failure_log.py, line 44) itself crashed: yaml.safe_dump() recursed through the failure dict and raised yaml.representer.RepresenterError: ('cannot represent an object', 'epel') at representer.py line 231 (represent_undefined). |
mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP 
Error 404 - Not Found\nTrying other mirror.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\n\n\n One of the configured repositories failed (Extra Packages for Enterprise Linux),\n and yum doesn\'t have enough cached data to continue. At this point the only\n safe thing yum can do is fail. There are a few ways to work "fix" this:\n\n 1. Contact the upstream for the repository and get them to fix the problem.\n\n 2. Reconfigure the baseurl/etc. for the repository, to point to a working\n upstream. This is most often useful if you are using a newer\n distribution release than is supported by the repository (and the\n packages for the previous distribution release still work).\n\n 3. Run the command with the repository temporarily disabled\n yum --disablerepo=epel ...\n\n 4. 
Disable the repository permanently, so yum won\'t use it by default. Yum\n will then just ignore the repository until you permanently enable it\n again or use --enablerepo for temporary usage:\n\n yum-config-manager --disable epel\n or\n subscription-manager repos --disable=epel\n\n 5. Configure the failing repository to be skipped, if it is unavailable.\n Note that yum will try to contact the repo. when it runs most commands,\n so will have to try and fail each time (and thus. yum will be be much\n slower). If it is a very temporary problem though, this is often a nice\n compromise:\n\n yum-config-manager --save --setopt=epel.skip_if_unavailable=true\n\nfailure: repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\n', '_ansible_no_log': False}}Traceback (most recent call last): File 
"/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in 
represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)yaml.representer.RepresenterError: ('cannot represent an object', 'epel')Failure object was: {'smithi022.front.sepia.ceph.com': {'changed': False, 'results': [], 'rc': 1, 'invocation': {'module_args': {'lock_timeout': 0, 'update_cache': False, 'disable_excludes': None, 'exclude': [], 'allow_downgrade': False, 'disable_gpg_check': False, 'conf_file': None, 'use_backend': 'auto', 'state': 'latest', 'disablerepo': [], 'releasever': None, 'skip_broken': False, 'autoremove': False, 'download_dir': None, 'enable_plugin': [], 'installroot': '/', 'install_weak_deps': True, 'name': ['nagios-common', 'nrpe', 'nagios-plugins', 'nagios-plugins-load'], 'download_only': False, 'bugfix': False, 'list': None, 'install_repoquery': True, 'update_only': False, 'disable_plugin': [], 'enablerepo': ['epel'], 'security': False, 'validate_certs': True}}, 'msg': 'http://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nTo address this issue please refer to the below wiki article \n\nhttps://wiki.centos.org/yum-errors\n\nIf above article doesn\'t help to resolve this issue please use https://bugs.centos.org/.\n\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other 
mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other 
mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a6554f8a72afcf0952727cbb8679d135408b0b3b630497cf9d321134cd85859-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/9de8acce70ccee52b3b4adaf87c0b4233fe3d06a0bb0145aec34da6cf5659574-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other 
mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nTrying other mirror.\n\n\n One of the configured repositories failed (Extra Packages for Enterprise Linux),\n and yum doesn\'t have enough cached data to continue. 
At this point the only\n safe thing yum can do is fail. There are a few ways to work "fix" this:\n\n 1. Contact the upstream for the repository and get them to fix the problem.\n\n 2. Reconfigure the baseurl/etc. for the repository, to point to a working\n upstream. This is most often useful if you are using a newer\n distribution release than is supported by the repository (and the\n packages for the previous distribution release still work).\n\n 3. Run the command with the repository temporarily disabled\n yum --disablerepo=epel ...\n\n 4. Disable the repository permanently, so yum won\'t use it by default. Yum\n will then just ignore the repository until you permanently enable it\n again or use --enablerepo for temporary usage:\n\n yum-config-manager --disable epel\n or\n subscription-manager repos --disable=epel\n\n 5. Configure the failing repository to be skipped, if it is unavailable.\n Note that yum will try to contact the repo. when it runs most commands,\n so will have to try and fail each time (and thus. yum will be be much\n slower). 
If it is a very temporary problem though, this is often a nice\n compromise:\n\n yum-config-manager --save --setopt=epel.skip_if_unavailable=true\n\nfailure: repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.\nhttp://ftp.linux.ncsu.edu/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirror.oss.ou.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://download-cc-rdu01.fedoraproject.org/pub/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirror.pnl.gov/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\nhttp://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/d69bfed263e53ff827c9b958f51fbe34691275b99eda24ed4817e6a28a0a7445-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found\n', '_ansible_no_log': False}}Traceback (most recent call last): File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping 
node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)yaml.representer.RepresenterError: ('cannot represent an object', 'epel') |
||||||||||||||
fail | 5079838 | 2020-05-21 23:19:04 | 2020-05-25 06:30:23 | 2020-05-25 07:16:23 | 0:46:00 | 0:33:25 | 0:12:35 | smithi | master | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
||||||||||||||
pass | 5079839 | 2020-05-21 23:19:05 | 2020-05-25 06:30:32 | 2020-05-25 07:40:33 | 1:10:01 | 1:00:10 | 0:09:51 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-lz4.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079840 | 2020-05-21 23:19:06 | 2020-05-25 06:30:54 | 2020-05-25 06:52:54 | 0:22:00 | 0:12:38 | 0:09:22 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079841 | 2020-05-21 23:19:07 | 2020-05-25 06:50:39 | 2020-05-25 07:30:39 | 0:40:00 | 0:32:20 | 0:07:40 | smithi | master | rhel | 8.1 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079842 | 2020-05-21 23:19:08 | 2020-05-25 06:50:40 | 2020-05-25 07:06:39 | 0:15:59 | 0:09:13 | 0:06:46 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5079843 | 2020-05-21 23:19:09 | 2020-05-25 06:50:47 | 2020-05-25 07:22:46 | 0:31:59 | 0:24:22 | 0:07:37 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T07:12:42.275309+00:00 smithi025 bash[25447]: debug 2020-05-25T07:12:42.273+0000 7f4315d5d700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
||||||||||||||
pass | 5079844 | 2020-05-21 23:19:10 | 2020-05-25 06:52:55 | 2020-05-25 09:06:59 | 2:14:04 | 2:02:18 | 0:11:46 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
pass | 5079845 | 2020-05-21 23:19:11 | 2020-05-25 06:52:55 | 2020-05-25 07:24:55 | 0:32:00 | 0:12:06 | 0:19:54 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5079846 | 2020-05-21 23:19:12 | 2020-05-25 06:52:55 | 2020-05-25 07:12:55 | 0:20:00 | 0:11:13 | 0:08:47 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_seq_read.yaml} | 1 | |
pass | 5079847 | 2020-05-21 23:19:13 | 2020-05-25 06:52:56 | 2020-05-25 07:16:55 | 0:23:59 | 0:08:27 | 0:15:32 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_striper.yaml} | 2 | |
fail | 5079848 | 2020-05-21 23:19:14 | 2020-05-25 06:54:24 | 2020-05-25 07:12:24 | 0:18:00 | 0:09:13 | 0:08:47 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 5079849 | 2020-05-21 23:19:15 | 2020-05-25 06:54:41 | 2020-05-25 07:38:41 | 0:44:00 | 0:33:24 | 0:10:36 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/erasure-code.yaml} | 1 | |
pass | 5079850 | 2020-05-21 23:19:16 | 2020-05-25 06:54:50 | 2020-05-25 07:30:50 | 0:36:00 | 0:28:48 | 0:07:12 | smithi | master | centos | 8.1 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079851 | 2020-05-21 23:19:17 | 2020-05-25 06:54:58 | 2020-05-25 07:20:58 | 0:26:00 | 0:12:23 | 0:13:37 | smithi | master | centos | 8.1 | rados/cephadm/smoke/{distro/centos_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5079852 | 2020-05-21 23:19:18 | 2020-05-25 06:56:56 | 2020-05-25 07:22:56 | 0:26:00 | 0:14:43 | 0:11:17 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
pass | 5079853 | 2020-05-21 23:19:19 | 2020-05-25 06:56:57 | 2020-05-25 07:22:56 | 0:25:59 | 0:07:45 | 0:18:14 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
fail | 5079854 | 2020-05-21 23:19:20 | 2020-05-25 06:56:56 | 2020-05-25 07:34:57 | 0:38:01 | 0:26:46 | 0:11:15 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
pass | 5079855 | 2020-05-21 23:19:21 | 2020-05-25 06:56:56 | 2020-05-25 08:08:58 | 1:12:02 | 0:26:11 | 0:45:51 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5079856 | 2020-05-21 23:19:22 | 2020-05-25 06:58:30 | 2020-05-25 07:24:29 | 0:25:59 | 0:18:46 | 0:07:13 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |
pass | 5079857 | 2020-05-21 23:19:23 | 2020-05-25 06:58:30 | 2020-05-25 07:36:30 | 0:38:00 | 0:26:49 | 0:11:11 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5079858 | 2020-05-21 23:19:24 | 2020-05-25 06:58:34 | 2020-05-25 07:12:33 | 0:13:59 | 0:07:25 | 0:06:34 | smithi | master | centos | 8.1 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079859 | 2020-05-21 23:19:25 | 2020-05-25 06:58:55 | 2020-05-25 07:30:55 | 0:32:00 | 0:24:27 | 0:07:33 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
pass | 5079860 | 2020-05-21 23:19:26 | 2020-05-25 06:58:55 | 2020-05-25 07:18:55 | 0:20:00 | 0:12:36 | 0:07:24 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5079861 | 2020-05-21 23:19:27 | 2020-05-25 07:00:32 | 2020-05-25 07:40:31 | 0:39:59 | 0:32:41 | 0:07:18 | smithi | master | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T07:20:19.569143+00:00 smithi152 bash[25087]: debug 2020-05-25T07:20:19.567+0000 7f2e705c2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079862 | 2020-05-21 23:19:27 | 2020-05-25 07:12:40 | 2020-05-25 07:32:40 | 0:20:00 | 0:10:18 | 0:09:42 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_write.yaml} | 1 | |
pass | 5079863 | 2020-05-21 23:19:28 | 2020-05-25 07:12:40 | 2020-05-25 07:34:40 | 0:22:00 | 0:14:50 | 0:07:10 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-hybrid.yaml supported-random-distro$/{rhel_8.yaml} tasks/insights.yaml} | 2 | |
pass | 5079864 | 2020-05-21 23:19:29 | 2020-05-25 07:12:40 | 2020-05-25 08:18:41 | 1:06:01 | 0:56:20 | 0:09:41 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 5079865 | 2020-05-21 23:19:30 | 2020-05-25 07:12:41 | 2020-05-25 07:34:40 | 0:21:59 | 0:12:14 | 0:09:45 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
fail | 5079866 | 2020-05-21 23:19:32 | 2020-05-25 07:12:52 | 2020-05-25 07:34:52 | 0:22:00 | 0:07:31 | 0:14:29 | smithi | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
Command failed on smithi181 with status 5: 'sudo systemctl stop ceph-fb4d4218-9e59-11ea-a06a-001a4aab830c@mon.a'
pass | 5079867 | 2020-05-21 23:19:33 | 2020-05-25 07:12:54 | 2020-05-25 07:38:54 | 0:26:00 | 0:14:30 | 0:11:30 | smithi | master | rhel | 8.1 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079868 | 2020-05-21 23:19:34 | 2020-05-25 07:12:56 | 2020-05-25 09:27:00 | 2:14:04 | 1:41:13 | 0:32:51 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
fail | 5079869 | 2020-05-21 23:19:35 | 2020-05-25 07:13:03 | 2020-05-25 07:35:03 | 0:22:00 | 0:07:37 | 0:14:23 | smithi | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
Command failed on smithi083 with status 5: 'sudo systemctl stop ceph-1cc3bb52-9e5a-11ea-a06a-001a4aab830c@mon.smithi083'
pass | 5079870 | 2020-05-21 23:19:36 | 2020-05-25 07:15:05 | 2020-05-25 08:07:06 | 0:52:01 | 0:35:17 | 0:16:44 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
pass | 5079871 | 2020-05-21 23:19:37 | 2020-05-25 07:15:05 | 2020-05-25 08:01:05 | 0:46:00 | 0:35:15 | 0:10:45 | smithi | master | rhel | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079872 | 2020-05-21 23:19:38 | 2020-05-25 07:15:05 | 2020-05-25 07:57:06 | 0:42:01 | 0:30:36 | 0:11:25 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 5079873 | 2020-05-21 23:19:39 | 2020-05-25 07:16:40 | 2020-05-25 08:28:41 | 1:12:01 | 0:11:25 | 1:00:36 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5079874 | 2020-05-21 23:19:40 | 2020-05-25 07:16:57 | 2020-05-25 08:40:58 | 1:24:01 | 1:01:20 | 0:22:41 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079875 | 2020-05-21 23:19:41 | 2020-05-25 07:19:12 | 2020-05-25 07:37:12 | 0:18:00 | 0:07:56 | 0:10:04 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079876 | 2020-05-21 23:19:42 | 2020-05-25 07:20:56 | 2020-05-25 07:46:56 | 0:26:00 | 0:18:20 | 0:07:40 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 5079877 | 2020-05-21 23:19:42 | 2020-05-25 07:20:56 | 2020-05-25 07:34:56 | 0:14:00 | 0:06:33 | 0:07:27 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5079878 | 2020-05-21 23:19:44 | 2020-05-25 07:20:59 | 2020-05-25 07:38:59 | 0:18:00 | 0:11:44 | 0:06:16 | smithi | master | centos | 8.1 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079879 | 2020-05-21 23:19:44 | 2020-05-25 07:23:06 | 2020-05-25 07:37:05 | 0:13:59 | 0:09:00 | 0:04:59 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079880 | 2020-05-21 23:19:45 | 2020-05-25 07:23:07 | 2020-05-25 07:53:06 | 0:29:59 | 0:21:38 | 0:08:21 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_omap_write.yaml} | 1 | |
fail | 5079881 | 2020-05-21 23:19:46 | 2020-05-25 07:23:06 | 2020-05-25 07:41:06 | 0:18:00 | 0:10:52 | 0:07:08 | smithi | master | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
Command failed on smithi098 with status 5: 'sudo systemctl stop ceph-d45ff7d0-9e5a-11ea-a06a-001a4aab830c@mon.a'
pass | 5079882 | 2020-05-21 23:19:47 | 2020-05-25 07:24:46 | 2020-05-25 07:48:46 | 0:24:00 | 0:12:48 | 0:11:12 | smithi | master | rhel | 8.1 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5079883 | 2020-05-21 23:19:48 | 2020-05-25 07:24:50 | 2020-05-25 07:40:50 | 0:16:00 | 0:06:25 | 0:09:35 | smithi | master | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
Command failed on smithi173 with status 5: 'sudo systemctl stop ceph-e8605f5e-9e5a-11ea-a06a-001a4aab830c@mon.a'
pass | 5079884 | 2020-05-21 23:19:49 | 2020-05-25 07:24:57 | 2020-05-25 08:04:57 | 0:40:00 | 0:18:30 | 0:21:30 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_recovery.yaml} | 3 | |
pass | 5079885 | 2020-05-21 23:19:50 | 2020-05-25 07:24:59 | 2020-05-25 07:50:59 | 0:26:00 | 0:13:38 | 0:12:22 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
pass | 5079886 | 2020-05-21 23:19:51 | 2020-05-25 07:30:27 | 2020-05-25 07:52:27 | 0:22:00 | 0:14:01 | 0:07:59 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5079887 | 2020-05-21 23:19:52 | 2020-05-25 07:30:40 | 2020-05-25 08:02:40 | 0:32:00 | 0:26:09 | 0:05:51 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
pass | 5079888 | 2020-05-21 23:19:53 | 2020-05-25 07:30:42 | 2020-05-25 08:04:43 | 0:34:01 | 0:24:50 | 0:09:11 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 5079889 | 2020-05-21 23:19:54 | 2020-05-25 07:30:43 | 2020-05-25 08:16:43 | 0:46:00 | 0:36:01 | 0:09:59 | smithi | master | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy
pass | 5079890 | 2020-05-21 23:19:54 | 2020-05-25 07:30:51 | 2020-05-25 08:00:51 | 0:30:00 | 0:21:08 | 0:08:52 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-hybrid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
pass | 5079891 | 2020-05-21 23:19:55 | 2020-05-25 07:30:57 | 2020-05-25 08:10:57 | 0:40:00 | 0:27:58 | 0:12:02 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
pass | 5079892 | 2020-05-21 23:19:56 | 2020-05-25 07:32:50 | 2020-05-25 07:50:49 | 0:17:59 | 0:11:33 | 0:06:26 | smithi | master | centos | 8.1 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079893 | 2020-05-21 23:19:57 | 2020-05-25 07:32:50 | 2020-05-25 07:50:49 | 0:17:59 | 0:12:11 | 0:05:48 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/mgr.yaml} | 1 | |
fail | 5079894 | 2020-05-21 23:19:58 | 2020-05-25 07:32:50 | 2020-05-25 07:52:50 | 0:20:00 | 0:09:26 | 0:10:34 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5079895 | 2020-05-21 23:19:59 | 2020-05-25 07:32:59 | 2020-05-25 08:10:59 | 0:38:00 | 0:32:25 | 0:05:35 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 5079896 | 2020-05-21 23:20:00 | 2020-05-25 07:33:20 | 2020-05-25 07:55:20 | 0:22:00 | 0:12:37 | 0:09:23 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/sample_fio.yaml} | 1 | |
pass | 5079897 | 2020-05-21 23:20:01 | 2020-05-25 07:33:37 | 2020-05-25 07:51:37 | 0:18:00 | 0:07:49 | 0:10:11 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079898 | 2020-05-21 23:20:02 | 2020-05-25 07:34:42 | 2020-05-25 08:00:42 | 0:26:00 | 0:19:05 | 0:06:55 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 5079899 | 2020-05-21 23:20:03 | 2020-05-25 07:34:42 | 2020-05-25 08:02:43 | 0:28:01 | 0:22:09 | 0:05:52 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
fail | 5079900 | 2020-05-21 23:20:04 | 2020-05-25 07:34:42 | 2020-05-25 07:46:42 | 0:12:00 | 0:04:36 | 0:07:24 | smithi | master | rhel | 8.1 | rados/cephadm/smoke/{distro/rhel_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
Command failed on smithi161 with status 5: 'sudo systemctl stop ceph-9a8eef38-9e5b-11ea-a06a-001a4aab830c@mon.a'
pass | 5079901 | 2020-05-21 23:20:05 | 2020-05-25 07:34:42 | 2020-05-25 07:56:42 | 0:22:00 | 0:14:56 | 0:07:04 | smithi | master | centos | 8.1 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079902 | 2020-05-21 23:20:05 | 2020-05-25 07:34:42 | 2020-05-25 08:10:42 | 0:36:00 | 0:12:12 | 0:23:48 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5079903 | 2020-05-21 23:20:06 | 2020-05-25 07:34:53 | 2020-05-25 08:14:54 | 0:40:01 | 0:18:48 | 0:21:13 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 | |
fail | 5079904 | 2020-05-21 23:20:07 | 2020-05-25 07:34:57 | 2020-05-25 08:18:58 | 0:44:01 | 0:36:41 | 0:07:20 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T07:52:32.433599+00:00 smithi066 bash[22004]: debug 2020-05-25T07:52:32.431+0000 7fe0e6af7700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi066 ... ' in syslog
pass | 5079905 | 2020-05-21 23:20:08 | 2020-05-25 07:34:57 | 2020-05-25 08:04:57 | 0:30:00 | 0:19:17 | 0:10:43 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects-balanced.yaml} | 2 | |
pass | 5079906 | 2020-05-21 23:20:09 | 2020-05-25 07:34:58 | 2020-05-25 08:50:59 | 1:16:01 | 1:01:15 | 0:14:46 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079907 | 2020-05-21 23:20:10 | 2020-05-25 07:35:05 | 2020-05-25 09:57:08 | 2:22:03 | 2:14:08 | 0:07:55 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079908 | 2020-05-21 23:20:11 | 2020-05-25 07:36:48 | 2020-05-25 08:00:47 | 0:23:59 | 0:18:35 | 0:05:24 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/none.yaml centos_latest.yaml} | 1 | |
fail | 5079909 | 2020-05-21 23:20:12 | 2020-05-25 07:36:48 | 2020-05-25 08:20:48 | 0:44:00 | 0:35:23 | 0:08:37 | smithi | master | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:01:22.138109+00:00 smithi165 bash[25721]: debug 2020-05-25T08:01:22.136+0000 7f826df3c700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079910 | 2020-05-21 23:20:13 | 2020-05-25 07:36:48 | 2020-05-25 07:54:47 | 0:17:59 | 0:10:39 | 0:07:20 | smithi | master | centos | 8.1 | rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079911 | 2020-05-21 23:20:14 | 2020-05-25 07:36:51 | 2020-05-25 07:54:50 | 0:17:59 | 0:10:15 | 0:07:44 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/sample_radosbench.yaml} | 1 | |
fail | 5079912 | 2020-05-21 23:20:14 | 2020-05-25 07:36:56 | 2020-05-25 08:00:56 | 0:24:00 | 0:12:35 | 0:11:25 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T07:56:54.132040+00:00 smithi183 bash[10483]: debug 2020-05-25T07:56:54.127+0000 7f0d55ff1700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079913 | 2020-05-21 23:20:15 | 2020-05-25 07:37:07 | 2020-05-25 07:53:06 | 0:15:59 | 0:10:55 | 0:05:04 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079914 | 2020-05-21 23:20:16 | 2020-05-25 07:37:13 | 2020-05-25 08:21:14 | 0:44:01 | 0:10:24 | 0:33:37 | smithi | master | rhel | 8.1 | rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 5079915 | 2020-05-21 23:20:17 | 2020-05-25 07:38:59 | 2020-05-25 08:31:00 | 0:52:01 | 0:36:01 | 0:16:00 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
fail | 5079916 | 2020-05-21 23:20:18 | 2020-05-25 07:38:59 | 2020-05-25 08:02:59 | 0:24:00 | 0:12:57 | 0:11:03 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T07:54:03.230114+00:00 smithi134 bash[10357]: debug 2020-05-25T07:54:03.226+0000 7fae8550a700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi134 ... ' in syslog
pass | 5079917 | 2020-05-21 23:20:19 | 2020-05-25 07:38:59 | 2020-05-25 08:20:59 | 0:42:00 | 0:21:54 | 0:20:06 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects-localized.yaml} | 2 | |
pass | 5079918 | 2020-05-21 23:20:20 | 2020-05-25 07:39:00 | 2020-05-25 08:19:00 | 0:40:00 | 0:26:18 | 0:13:42 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5079919 | 2020-05-21 23:20:21 | 2020-05-25 07:40:49 | 2020-05-25 08:16:49 | 0:36:00 | 0:22:42 | 0:13:18 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
pass | 5079920 | 2020-05-21 23:20:22 | 2020-05-25 07:40:49 | 2020-05-25 08:24:49 | 0:44:00 | 0:26:19 | 0:17:41 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5079921 | 2020-05-21 23:20:23 | 2020-05-25 07:40:49 | 2020-05-25 08:04:49 | 0:24:00 | 0:18:45 | 0:05:15 | smithi | master | rhel | 8.1 | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079922 | 2020-05-21 23:20:23 | 2020-05-25 07:40:51 | 2020-05-25 08:26:51 | 0:46:00 | 0:27:30 | 0:18:30 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
fail | 5079923 | 2020-05-21 23:20:24 | 2020-05-25 07:40:51 | 2020-05-25 08:00:51 | 0:20:00 | 0:12:53 | 0:07:07 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 5079924 | 2020-05-21 23:20:25 | 2020-05-25 07:40:53 | 2020-05-25 08:08:53 | 0:28:00 | 0:14:13 | 0:13:47 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/progress.yaml} | 2 | |
pass | 5079925 | 2020-05-21 23:20:26 | 2020-05-25 07:41:07 | 2020-05-25 08:31:07 | 0:50:00 | 0:33:17 | 0:16:43 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
pass | 5079926 | 2020-05-21 23:20:27 | 2020-05-25 07:42:35 | 2020-05-25 08:30:35 | 0:48:00 | 0:25:26 | 0:22:34 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 5079927 | 2020-05-21 23:20:28 | 2020-05-25 07:42:48 | 2020-05-25 08:22:49 | 0:40:01 | 0:23:35 | 0:16:26 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:10:36.281627+00:00 smithi175 bash[13473]: debug 2020-05-25T08:10:36.274+0000 7f731f668700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079928 | 2020-05-21 23:20:29 | 2020-05-25 07:46:49 | 2020-05-25 08:12:49 | 0:26:00 | 0:19:54 | 0:06:06 | smithi | master | rhel | 8.1 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079929 | 2020-05-21 23:20:30 | 2020-05-25 07:46:49 | 2020-05-25 08:12:49 | 0:26:00 | 0:16:09 | 0:09:51 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 5079930 | 2020-05-21 23:20:31 | 2020-05-25 07:46:52 | 2020-05-25 08:12:51 | 0:25:59 | 0:18:44 | 0:07:15 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
pass | 5079931 | 2020-05-21 23:20:32 | 2020-05-25 07:46:57 | 2020-05-25 09:16:59 | 1:30:02 | 0:13:58 | 1:16:04 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5079932 | 2020-05-21 23:20:33 | 2020-05-25 07:48:41 | 2020-05-25 08:14:40 | 0:25:59 | 0:12:57 | 0:13:02 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:10:53.865239+00:00 smithi170 bash[14933]: debug 2020-05-25T08:10:53.861+0000 7f4d8b94b700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079933 | 2020-05-21 23:20:34 | 2020-05-25 07:48:41 | 2020-05-25 08:08:41 | 0:20:00 | 0:13:47 | 0:06:13 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079934 | 2020-05-21 23:20:35 | 2020-05-25 07:48:47 | 2020-05-25 08:52:48 | 1:04:01 | 0:35:32 | 0:28:29 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/octopus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
fail | 5079935 | 2020-05-21 23:20:35 | 2020-05-25 07:49:02 | 2020-05-25 08:13:02 | 0:24:00 | 0:12:36 | 0:11:24 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:05:25.053928+00:00 smithi006 bash[13313]: debug 2020-05-25T08:05:25.049+0000 7f2508fd4700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi006 ... ' in syslog
pass | 5079936 | 2020-05-21 23:20:36 | 2020-05-25 07:49:02 | 2020-05-25 08:11:02 | 0:22:00 | 0:15:59 | 0:06:01 | smithi | master | rhel | 8.1 | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079937 | 2020-05-21 23:20:37 | 2020-05-25 07:51:07 | 2020-05-25 08:25:07 | 0:34:00 | 0:28:02 | 0:05:58 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/misc.yaml} | 1 | |
pass | 5079938 | 2020-05-21 23:20:38 | 2020-05-25 07:51:07 | 2020-05-25 08:31:07 | 0:40:00 | 0:26:11 | 0:13:49 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects-balanced.yaml} | 2 | |
pass | 5079939 | 2020-05-21 23:20:39 | 2020-05-25 07:51:07 | 2020-05-25 08:07:07 | 0:16:00 | 0:06:14 | 0:09:46 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5079940 | 2020-05-21 23:20:40 | 2020-05-25 07:51:07 | 2020-05-25 08:33:07 | 0:42:00 | 0:33:27 | 0:08:33 | smithi | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079941 | 2020-05-21 23:20:41 | 2020-05-25 07:51:14 | 2020-05-25 09:01:15 | 1:10:01 | 0:59:57 | 0:10:04 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079942 | 2020-05-21 23:20:41 | 2020-05-25 07:51:38 | 2020-05-25 10:11:41 | 2:20:03 | 2:13:20 | 0:06:43 | smithi | master | centos | 8.1 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079943 | 2020-05-21 23:20:42 | 2020-05-25 07:52:44 | 2020-05-25 08:16:43 | 0:23:59 | 0:16:17 | 0:07:42 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/cosbench_64K_write.yaml} | 1 | |
fail | 5079944 | 2020-05-21 23:20:43 | 2020-05-25 07:52:51 | 2020-05-25 08:36:51 | 0:44:00 | 0:32:46 | 0:11:14 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:16:08.150739+00:00 smithi092 bash[18096]: debug 2020-05-25T08:16:08.142+0000 7f514d99b700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079945 | 2020-05-21 23:20:44 | 2020-05-25 07:52:53 | 2020-05-25 08:10:52 | 0:17:59 | 0:07:55 | 0:10:04 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5079946 | 2020-05-21 23:20:45 | 2020-05-25 07:53:07 | 2020-05-25 08:11:07 | 0:18:00 | 0:07:26 | 0:10:34 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
Failure Reason: expected MON_CLOCK_SKEW but got none
pass | 5079947 | 2020-05-21 23:20:46 | 2020-05-25 07:53:07 | 2020-05-25 08:23:07 | 0:30:00 | 0:15:11 | 0:14:49 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/readwrite.yaml} | 2 | |
pass | 5079948 | 2020-05-21 23:20:47 | 2020-05-25 07:55:04 | 2020-05-25 08:19:04 | 0:24:00 | 0:13:36 | 0:10:24 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
fail | 5079949 | 2020-05-21 23:20:48 | 2020-05-25 07:55:04 | 2020-05-25 10:17:08 | 2:22:04 | 2:13:56 | 0:08:08 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: saw valgrind issues
pass | 5079950 | 2020-05-21 23:20:49 | 2020-05-25 07:55:21 | 2020-05-25 08:37:21 | 0:42:00 | 0:26:39 | 0:15:21 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects-localized.yaml} | 2 | |
pass | 5079951 | 2020-05-21 23:20:49 | 2020-05-25 07:56:42 | 2020-05-25 08:32:42 | 0:36:00 | 0:25:07 | 0:10:53 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 5079952 | 2020-05-21 23:20:50 | 2020-05-25 07:56:43 | 2020-05-25 08:22:43 | 0:26:00 | 0:12:52 | 0:13:08 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_latest.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:17:45.125147+00:00 smithi196 bash[10347]: debug 2020-05-25T08:17:45.120+0000 7f5053606700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079953 | 2020-05-21 23:20:51 | 2020-05-25 07:56:51 | 2020-05-25 08:14:51 | 0:18:00 | 0:09:57 | 0:08:03 | smithi | master | centos | 8.1 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079954 | 2020-05-21 23:20:52 | 2020-05-25 07:57:07 | 2020-05-25 08:37:07 | 0:40:00 | 0:30:11 | 0:09:49 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
pass | 5079955 | 2020-05-21 23:20:53 | 2020-05-25 07:58:52 | 2020-05-25 08:24:52 | 0:26:00 | 0:10:54 | 0:15:06 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} | 2 | |
pass | 5079956 | 2020-05-21 23:20:54 | 2020-05-25 08:00:27 | 2020-05-25 08:20:26 | 0:19:59 | 0:14:26 | 0:05:33 | smithi | master | rhel | 8.1 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
fail | 5079957 | 2020-05-21 23:20:55 | 2020-05-25 08:11:14 | 2020-05-25 08:35:14 | 0:24:00 | 0:12:53 | 0:11:07 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_latest.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:27:06.420535+00:00 smithi188 bash[10549]: debug 2020-05-25T08:27:06.416+0000 7f26b62f8700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi188 ... ' in syslog
pass | 5079958 | 2020-05-21 23:20:56 | 2020-05-25 08:11:14 | 2020-05-25 08:57:15 | 0:46:01 | 0:18:07 | 0:27:54 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
pass | 5079959 | 2020-05-21 23:20:57 | 2020-05-25 08:12:41 | 2020-05-25 08:50:41 | 0:38:00 | 0:25:47 | 0:12:13 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 5079960 | 2020-05-21 23:20:58 | 2020-05-25 08:12:50 | 2020-05-25 09:14:51 | 1:02:01 | 0:11:48 | 0:50:13 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5079961 | 2020-05-21 23:20:58 | 2020-05-25 08:12:50 | 2020-05-25 09:08:51 | 0:56:01 | 0:32:50 | 0:23:11 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/root.yaml msgr/async-v1only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:49:34.432728+00:00 smithi136 bash[25446]: debug 2020-05-25T08:49:34.431+0000 7fb056325700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079962 | 2020-05-21 23:20:59 | 2020-05-25 08:12:53 | 2020-05-25 08:54:53 | 0:42:00 | 0:26:39 | 0:15:21 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 5079963 | 2020-05-21 23:21:00 | 2020-05-25 08:13:03 | 2020-05-25 08:35:03 | 0:22:00 | 0:12:33 | 0:09:27 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4K_rand_read.yaml} | 1 | |
pass | 5079964 | 2020-05-21 23:21:01 | 2020-05-25 08:14:58 | 2020-05-25 08:30:57 | 0:15:59 | 0:07:48 | 0:08:11 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5079965 | 2020-05-21 23:21:02 | 2020-05-25 08:14:58 | 2020-05-25 08:40:57 | 0:25:59 | 0:12:14 | 0:13:45 | smithi | master | ubuntu | 18.04 | rados/cephadm/orchestrator_cli/{2-node-mgr.yaml orchestrator_cli.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 5079966 | 2020-05-21 23:21:03 | 2020-05-25 08:14:58 | 2020-05-25 09:00:58 | 0:46:00 | 0:20:49 | 0:25:11 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 3 | |
pass | 5079967 | 2020-05-21 23:21:04 | 2020-05-25 08:14:58 | 2020-05-25 08:32:57 | 0:17:59 | 0:11:49 | 0:06:10 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5079968 | 2020-05-21 23:21:04 | 2020-05-25 08:17:00 | 2020-05-25 08:55:00 | 0:38:00 | 0:12:49 | 0:25:11 | smithi | master | centos | 7.6 | rados/cephadm/smoke/{distro/centos_7.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:51:11.412131+00:00 smithi086 bash: debug 2020-05-25T08:51:11.410+0000 7f4889ba2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079969 | 2020-05-21 23:21:05 | 2020-05-25 08:17:00 | 2020-05-25 09:05:01 | 0:48:01 | 0:17:42 | 0:30:19 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 5079970 | 2020-05-21 23:21:06 | 2020-05-25 08:17:00 | 2020-05-25 08:37:00 | 0:20:00 | 0:15:04 | 0:04:56 | smithi | master | centos | 8.1 | rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5079971 | 2020-05-21 23:21:07 | 2020-05-25 08:18:58 | 2020-05-25 08:42:58 | 0:24:00 | 0:13:11 | 0:10:49 | smithi | master | centos | 7.6 | rados/cephadm/smoke-roleless/{distro/centos_7.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:36:31.056680+00:00 smithi165 bash: debug 2020-05-25T08:36:31.055+0000 7f4ccbbbd700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi165 ... ' in syslog
pass | 5079972 | 2020-05-21 23:21:08 | 2020-05-25 08:18:59 | 2020-05-25 08:44:59 | 0:26:00 | 0:18:52 | 0:07:08 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/repair_test.yaml} | 2 | |
pass | 5079973 | 2020-05-21 23:21:09 | 2020-05-25 08:19:01 | 2020-05-25 08:45:01 | 0:26:00 | 0:12:11 | 0:13:49 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
pass | 5079974 | 2020-05-21 23:21:10 | 2020-05-25 08:19:05 | 2020-05-25 09:31:06 | 1:12:01 | 1:01:44 | 0:10:17 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-hybrid.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5079975 | 2020-05-21 23:21:10 | 2020-05-25 08:20:43 | 2020-05-25 08:38:43 | 0:18:00 | 0:11:37 | 0:06:23 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079976 | 2020-05-21 23:21:11 | 2020-05-25 08:20:50 | 2020-05-25 08:42:49 | 0:21:59 | 0:12:43 | 0:09:16 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4K_rand_rw.yaml} | 1 | |
fail | 5079977 | 2020-05-21 23:21:12 | 2020-05-25 08:21:01 | 2020-05-25 08:45:00 | 0:23:59 | 0:07:48 | 0:16:11 | smithi | master | centos | 8.1 | rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{centos_latest.yaml} fixed-2.yaml} | 2 | |
Failure Reason: Command failed on smithi025 with status 5: 'sudo systemctl stop ceph-4d2ae744-9e63-11ea-a06a-001a4aab830c@mon.a'
pass | 5079978 | 2020-05-21 23:21:13 | 2020-05-25 08:21:03 | 2020-05-25 08:55:03 | 0:34:00 | 0:26:01 | 0:07:59 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5079979 | 2020-05-21 23:21:14 | 2020-05-25 08:21:03 | 2020-05-25 08:41:03 | 0:20:00 | 0:11:47 | 0:08:13 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |
pass | 5079980 | 2020-05-21 23:21:15 | 2020-05-25 08:21:15 | 2020-05-25 08:55:15 | 0:34:00 | 0:27:01 | 0:06:59 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5079981 | 2020-05-21 23:21:16 | 2020-05-25 08:22:51 | 2020-05-25 09:00:51 | 0:38:00 | 0:24:32 | 0:13:28 | smithi | master | rhel | 8.1 | rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
pass | 5079982 | 2020-05-21 23:21:17 | 2020-05-25 08:22:51 | 2020-05-25 08:52:51 | 0:30:00 | 0:13:04 | 0:16:56 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_8.yaml} tasks/workunits.yaml} | 2 | |
pass | 5079983 | 2020-05-21 23:21:18 | 2020-05-25 08:22:51 | 2020-05-25 09:18:52 | 0:56:01 | 0:48:53 | 0:07:08 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/mon.yaml} | 1 | |
pass | 5079984 | 2020-05-21 23:21:18 | 2020-05-25 08:22:52 | 2020-05-25 09:04:52 | 0:42:00 | 0:25:27 | 0:16:33 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 5079985 | 2020-05-21 23:21:19 | 2020-05-25 08:23:09 | 2020-05-25 09:03:09 | 0:40:00 | 0:26:57 | 0:13:03 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
fail | 5079986 | 2020-05-21 23:21:20 | 2020-05-25 08:25:07 | 2020-05-25 08:47:06 | 0:21:59 | 0:13:12 | 0:08:47 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_adoption.yaml} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
pass | 5079987 | 2020-05-21 23:21:21 | 2020-05-25 08:25:07 | 2020-05-25 08:47:06 | 0:21:59 | 0:13:46 | 0:08:13 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5079988 | 2020-05-21 23:21:22 | 2020-05-25 08:25:08 | 2020-05-25 08:43:08 | 0:18:00 | 0:11:18 | 0:06:42 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/many.yaml workloads/rados_5925.yaml} | 2 | |
pass | 5079989 | 2020-05-21 23:21:23 | 2020-05-25 08:27:08 | 2020-05-25 08:47:08 | 0:20:00 | 0:12:29 | 0:07:31 | smithi | master | centos | 8.0 | rados/cephadm/smoke/{distro/centos_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5079990 | 2020-05-21 23:21:24 | 2020-05-25 08:29:02 | 2020-05-25 09:01:02 | 0:32:00 | 0:14:46 | 0:17:14 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 5079991 | 2020-05-21 23:21:25 | 2020-05-25 08:29:15 | 2020-05-25 08:49:14 | 0:19:59 | 0:12:07 | 0:07:52 | smithi | master | centos | 8.1 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason: Command failed on smithi120 with status 13: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early quorum_status'
fail | 5079992 | 2020-05-21 23:21:26 | 2020-05-25 08:30:52 | 2020-05-25 09:02:52 | 0:32:00 | 0:20:14 | 0:11:46 | smithi | master | centos | 8.0 | rados/cephadm/smoke-roleless/{distro/centos_8.0.yaml start.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:50:59.463058+00:00 smithi105 bash[22019]: debug 2020-05-25T08:50:59.462+0000 7f72b63cd700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi105 ... ' in syslog
pass | 5079993 | 2020-05-21 23:21:26 | 2020-05-25 08:30:52 | 2020-05-25 08:56:52 | 0:26:00 | 0:20:13 | 0:05:47 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
pass | 5079994 | 2020-05-21 23:21:27 | 2020-05-25 08:30:58 | 2020-05-25 08:50:58 | 0:20:00 | 0:10:44 | 0:09:16 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_read.yaml} | 1 | |
fail | 5079995 | 2020-05-21 23:21:28 | 2020-05-25 08:31:01 | 2020-05-25 09:05:01 | 0:34:00 | 0:24:01 | 0:09:59 | smithi | master | centos | 8.1 | rados/cephadm/with-work/{distro/centos_latest.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v2only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason: '/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T08:55:03.842324+00:00 smithi188 bash[25078]: debug 2020-05-25T08:55:03.841+0000 7f3743d49700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog
pass | 5079996 | 2020-05-21 23:21:29 | 2020-05-25 08:31:02 | 2020-05-25 08:57:01 | 0:25:59 | 0:19:54 | 0:06:05 | smithi | master | centos | 8.1 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5079997 | 2020-05-21 23:21:30 | 2020-05-25 08:31:09 | 2020-05-25 08:49:08 | 0:17:59 | 0:10:48 | 0:07:11 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/scrub_test.yaml} | 2 | |
pass | 5079998 | 2020-05-21 23:21:31 | 2020-05-25 08:31:09 | 2020-05-25 09:19:09 | 0:48:00 | 0:31:03 | 0:16:57 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 3 | |
fail | 5079999 | 2020-05-21 23:21:32 | 2020-05-25 08:32:59 | 2020-05-25 08:52:58 | 0:19:59 | 0:09:26 | 0:10:33 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 5080000 | 2020-05-21 23:21:33 | 2020-05-25 08:32:59 | 2020-05-25 09:04:59 | 0:32:00 | 0:21:55 | 0:10:05 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
pass | 5080001 | 2020-05-21 23:21:34 | 2020-05-25 08:33:03 | 2020-05-25 08:59:03 | 0:26:00 | 0:13:23 | 0:12:37 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 5080002 | 2020-05-21 23:21:35 | 2020-05-25 08:33:09 | 2020-05-25 08:59:08 | 0:25:59 | 0:17:24 | 0:08:35 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5080003 | 2020-05-21 23:21:36 | 2020-05-25 08:35:20 | 2020-05-25 09:01:20 | 0:26:00 | 0:12:16 | 0:13:44 | smithi | master | centos | 8.1 | rados/cephadm/smoke/{distro/centos_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5080004 | 2020-05-21 23:21:37 | 2020-05-25 08:35:20 | 2020-05-25 08:59:20 | 0:24:00 | 0:11:23 | 0:12:37 | smithi | master | rhel | 8.1 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
pass | 5080005 | 2020-05-21 23:21:38 | 2020-05-25 08:37:08 | 2020-05-25 08:57:08 | 0:20:00 | 0:08:03 | 0:11:57 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 5080006 | 2020-05-21 23:21:39 | 2020-05-25 08:37:09 | 2020-05-25 08:57:08 | 0:19:59 | 0:11:12 | 0:08:47 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 5080007 | 2020-05-21 23:21:40 | 2020-05-25 08:37:09 | 2020-05-25 09:15:09 | 0:38:00 | 0:28:24 | 0:09:36 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
fail | 5080008 | 2020-05-21 23:21:41 | 2020-05-25 08:37:09 | 2020-05-25 09:19:09 | 0:42:00 | 0:31:06 | 0:10:54 | smithi | master | centos | 8.1 | rados/cephadm/smoke-roleless/{distro/centos_latest.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 5080009 | 2020-05-21 23:21:42 | 2020-05-25 08:37:22 | 2020-05-25 08:57:22 | 0:20:00 | 0:10:34 | 0:09:26 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-lz4.yaml supported-random-distro$/{centos_8.yaml} tasks/crash.yaml} | 2 | |
pass | 5080010 | 2020-05-21 23:21:43 | 2020-05-25 08:43:10 | 2020-05-25 09:05:09 | 0:21:59 | 0:13:22 | 0:08:37 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 5080011 | 2020-05-21 23:21:44 | 2020-05-25 08:43:10 | 2020-05-25 09:27:10 | 0:44:00 | 0:36:20 | 0:07:40 | smithi | master | rhel | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5080012 | 2020-05-21 23:21:45 | 2020-05-25 08:45:07 | 2020-05-25 09:19:07 | 0:34:00 | 0:23:49 | 0:10:11 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5080013 | 2020-05-21 23:21:46 | 2020-05-25 08:45:07 | 2020-05-25 09:55:08 | 1:10:01 | 1:01:33 | 0:08:28 | smithi | master | centos | 8.1 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_8.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5080014 | 2020-05-21 23:21:47 | 2020-05-25 08:45:07 | 2020-05-25 09:07:07 | 0:22:00 | 0:13:29 | 0:08:31 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5080015 | 2020-05-21 23:21:48 | 2020-05-25 08:45:07 | 2020-05-25 09:11:07 | 0:26:00 | 0:19:10 | 0:06:50 | smithi | master | centos | 8.1 | rados/valgrind-leaks/{1-start.yaml 2-inject-leak/osd.yaml centos_latest.yaml} | 1 | |
pass | 5080016 | 2020-05-21 23:21:49 | 2020-05-25 08:45:07 | 2020-05-25 10:17:08 | 1:32:01 | 1:20:49 | 0:11:12 | smithi | master | centos | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_8.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
pass | 5080017 | 2020-05-21 23:21:50 | 2020-05-25 08:47:20 | 2020-05-25 09:29:20 | 0:42:00 | 0:25:12 | 0:16:48 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
fail | 5080018 | 2020-05-21 23:21:51 | 2020-05-25 08:47:20 | 2020-05-25 09:37:20 | 0:50:00 | 0:37:21 | 0:12:39 | smithi | master | rhel | 8.0 | rados/cephadm/with-work/{distro/rhel_8.0.yaml fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:17:50.295096+00:00 smithi102 bash[30759]: debug 2020-05-25T09:17:50.293+0000 7f0fe08dd700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080019 | 2020-05-21 23:21:52 | 2020-05-25 08:47:20 | 2020-05-25 09:03:19 | 0:15:59 | 0:11:09 | 0:04:50 | smithi | master | centos | 8.1 | rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5080020 | 2020-05-21 23:21:53 | 2020-05-25 08:47:20 | 2020-05-25 09:29:20 | 0:42:00 | 0:12:14 | 0:29:46 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5080021 | 2020-05-21 23:21:54 | 2020-05-25 08:47:20 | 2020-05-25 09:33:20 | 0:46:00 | 0:31:09 | 0:14:51 | smithi | master | centos | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/one.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 5080022 | 2020-05-21 23:21:55 | 2020-05-25 08:49:04 | 2020-05-25 09:13:04 | 0:24:00 | 0:16:13 | 0:07:47 | smithi | master | rhel | 7.7 | rados/cephadm/smoke/{distro/rhel_7.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5080023 | 2020-05-21 23:21:56 | 2020-05-25 08:49:04 | 2020-05-25 09:21:04 | 0:32:00 | 0:25:04 | 0:06:56 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 2 | |
pass | 5080024 | 2020-05-21 23:21:57 | 2020-05-25 08:49:10 | 2020-05-25 09:23:09 | 0:33:59 | 0:24:05 | 0:09:54 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
fail | 5080025 | 2020-05-21 23:21:58 | 2020-05-25 08:49:15 | 2020-05-25 09:07:15 | 0:18:00 | 0:12:26 | 0:05:34 | smithi | master | centos | 8.1 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi006 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
pass | 5080026 | 2020-05-21 23:21:59 | 2020-05-25 08:51:00 | 2020-05-25 09:25:00 | 0:34:00 | 0:27:41 | 0:06:19 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml} | 2 | |
fail | 5080027 | 2020-05-21 23:22:00 | 2020-05-25 08:51:00 | 2020-05-25 09:23:00 | 0:32:00 | 0:24:26 | 0:07:34 | smithi | master | rhel | 7.7 | rados/cephadm/smoke-roleless/{distro/rhel_7.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:10:14.635348+00:00 smithi031 bash: debug 2020-05-25T09:10:14.633+0000 7f2bc4a16700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi031 ... ' in syslog |
pass | 5080028 | 2020-05-21 23:22:01 | 2020-05-25 08:51:00 | 2020-05-25 11:19:03 | 2:28:03 | 2:21:51 | 0:06:12 | smithi | master | rhel | 8.1 | rados/standalone/{supported-random-distro$/{rhel_8.yaml} workloads/osd.yaml} | 1 | |
pass | 5080029 | 2020-05-21 23:22:02 | 2020-05-25 08:51:03 | 2020-05-25 09:11:02 | 0:19:59 | 0:11:17 | 0:08:42 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/fio_4M_rand_write.yaml} | 1 | |
pass | 5080030 | 2020-05-21 23:22:03 | 2020-05-25 08:51:04 | 2020-05-25 09:05:04 | 0:14:00 | 0:06:17 | 0:07:43 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm_repos.yaml} | 1 | |
fail | 5080031 | 2020-05-21 23:22:04 | 2020-05-25 08:53:06 | 2020-05-25 09:31:06 | 0:38:00 | 0:24:28 | 0:13:32 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 5080032 | 2020-05-21 23:22:04 | 2020-05-25 08:53:06 | 2020-05-25 10:05:07 | 1:12:01 | 1:04:04 | 0:07:57 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
fail | 5080033 | 2020-05-21 23:22:05 | 2020-05-25 08:53:06 | 2020-05-25 09:27:06 | 0:34:00 | 0:26:46 | 0:07:14 | smithi | master | rhel | 8.1 | rados/cephadm/with-work/{distro/rhel_latest.yaml fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:15:44.585509+00:00 smithi086 bash[25741]: debug 2020-05-25T09:15:44.584+0000 7fa24157d700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080034 | 2020-05-21 23:22:06 | 2020-05-25 08:53:06 | 2020-05-25 09:27:06 | 0:34:00 | 0:20:18 | 0:13:42 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps-balanced.yaml} | 2 | |
pass | 5080035 | 2020-05-21 23:22:07 | 2020-05-25 08:55:11 | 2020-05-25 09:23:10 | 0:27:59 | 0:08:05 | 0:19:54 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
pass | 5080036 | 2020-05-21 23:22:08 | 2020-05-25 08:55:11 | 2020-05-25 09:17:11 | 0:22:00 | 0:15:09 | 0:06:51 | smithi | master | rhel | 8.0 | rados/cephadm/smoke/{distro/rhel_8.0.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5080037 | 2020-05-21 23:22:09 | 2020-05-25 08:55:11 | 2020-05-25 09:31:11 | 0:36:00 | 0:24:31 | 0:11:29 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5080038 | 2020-05-21 23:22:10 | 2020-05-25 08:55:16 | 2020-05-25 09:25:16 | 0:30:00 | 0:24:06 | 0:05:54 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
pass | 5080039 | 2020-05-21 23:22:11 | 2020-05-25 08:57:10 | 2020-05-25 09:17:09 | 0:19:59 | 0:12:12 | 0:07:47 | smithi | master | centos | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-snappy.yaml supported-random-distro$/{centos_8.yaml} tasks/failover.yaml} | 2 | |
pass | 5080040 | 2020-05-21 23:22:12 | 2020-05-25 08:57:10 | 2020-05-25 09:27:10 | 0:30:00 | 0:21:55 | 0:08:05 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/osd_stale_reads.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5080041 | 2020-05-21 23:22:13 | 2020-05-25 08:57:10 | 2020-05-25 09:35:10 | 0:38:00 | 0:26:11 | 0:11:49 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5080042 | 2020-05-21 23:22:14 | 2020-05-25 08:57:11 | 2020-05-25 09:43:11 | 0:46:00 | 0:37:25 | 0:08:35 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
pass | 5080043 | 2020-05-21 23:22:15 | 2020-05-25 08:57:10 | 2020-05-25 09:29:10 | 0:32:00 | 0:21:48 | 0:10:12 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-balanced.yaml} | 2 | |
fail | 5080044 | 2020-05-21 23:22:16 | 2020-05-25 08:57:16 | 2020-05-25 09:41:17 | 0:44:01 | 0:34:01 | 0:10:00 | smithi | master | rhel | 8.0 | rados/cephadm/smoke-roleless/{distro/rhel_8.0.yaml start.yaml} | 2 | |
Failure Reason:
timeout expired in wait_until_healthy |
pass | 5080045 | 2020-05-21 23:22:17 | 2020-05-25 08:57:23 | 2020-05-25 09:17:23 | 0:20:00 | 0:11:51 | 0:08:09 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_rand_read.yaml} | 1 | |
pass | 5080046 | 2020-05-21 23:22:18 | 2020-05-25 08:57:32 | 2020-05-25 09:31:32 | 0:34:00 | 0:22:28 | 0:11:32 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 5080047 | 2020-05-21 23:22:19 | 2020-05-25 08:59:18 | 2020-05-25 09:35:18 | 0:36:00 | 0:28:11 | 0:07:49 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 5080048 | 2020-05-21 23:22:19 | 2020-05-25 08:59:18 | 2020-05-25 09:19:17 | 0:19:59 | 0:11:31 | 0:08:28 | smithi | master | centos | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_8.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5080049 | 2020-05-21 23:22:20 | 2020-05-25 08:59:18 | 2020-05-25 10:11:19 | 1:12:01 | 1:01:44 | 0:10:17 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
pass | 5080050 | 2020-05-21 23:22:21 | 2020-05-25 08:59:21 | 2020-05-25 09:31:21 | 0:32:00 | 0:24:29 | 0:07:31 | smithi | master | centos | 8.1 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5080051 | 2020-05-21 23:22:22 | 2020-05-25 09:01:02 | 2020-05-25 09:21:02 | 0:20:00 | 0:09:23 | 0:10:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_adoption.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_adoption.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh' |
pass | 5080052 | 2020-05-21 23:22:23 | 2020-05-25 09:01:02 | 2020-05-25 09:53:03 | 0:52:01 | 0:41:17 | 0:10:44 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 5080053 | 2020-05-21 23:22:24 | 2020-05-25 09:01:02 | 2020-05-25 09:33:02 | 0:32:00 | 0:19:32 | 0:12:28 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
pass | 5080054 | 2020-05-21 23:22:25 | 2020-05-25 09:01:03 | 2020-05-25 09:31:03 | 0:30:00 | 0:21:25 | 0:08:35 | smithi | master | centos | 8.1 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 2 | |
fail | 5080055 | 2020-05-21 23:22:26 | 2020-05-25 09:01:07 | 2020-05-25 09:47:07 | 0:46:00 | 0:32:33 | 0:13:27 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04.yaml fixed-2.yaml mode/root.yaml msgr/async-v2only.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:25:00.089382+00:00 smithi188 bash[13454]: debug 2020-05-25T09:25:00.083+0000 7f85e4074700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080056 | 2020-05-21 23:22:27 | 2020-05-25 09:01:14 | 2020-05-25 09:19:14 | 0:18:00 | 0:10:35 | 0:07:25 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 5080057 | 2020-05-21 23:22:28 | 2020-05-25 09:01:14 | 2020-05-25 09:23:14 | 0:22:00 | 0:13:20 | 0:08:40 | smithi | master | rhel | 8.1 | rados/cephadm/smoke/{distro/rhel_latest.yaml fixed-2.yaml start.yaml} | 2 | |
pass | 5080058 | 2020-05-21 23:22:28 | 2020-05-25 09:01:14 | 2020-05-25 09:43:15 | 0:42:01 | 0:29:11 | 0:12:50 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 5080059 | 2020-05-21 23:22:29 | 2020-05-25 09:01:17 | 2020-05-25 09:21:16 | 0:19:59 | 0:12:26 | 0:07:33 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5080060 | 2020-05-21 23:22:30 | 2020-05-25 09:01:21 | 2020-05-25 09:23:21 | 0:22:00 | 0:10:43 | 0:11:17 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4K_seq_read.yaml} | 1 | |
fail | 5080061 | 2020-05-21 23:22:31 | 2020-05-25 09:03:09 | 2020-05-25 09:31:09 | 0:28:00 | 0:18:53 | 0:09:07 | smithi | master | rhel | 8.1 | rados/cephadm/smoke-roleless/{distro/rhel_latest.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:20:18.460426+00:00 smithi077 bash[21974]: debug 2020-05-25T09:20:18.458+0000 7f13ed886700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi077 ... ' in syslog |
pass | 5080062 | 2020-05-21 23:22:32 | 2020-05-25 09:03:10 | 2020-05-25 09:37:11 | 0:34:01 | 0:20:26 | 0:13:35 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/on.yaml distro$/{centos_7.6.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 3 | |
pass | 5080063 | 2020-05-21 23:22:33 | 2020-05-25 09:03:21 | 2020-05-25 09:27:20 | 0:23:59 | 0:14:43 | 0:09:16 | smithi | master | rhel | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
fail | 5080064 | 2020-05-21 23:22:34 | 2020-05-25 09:05:09 | 2020-05-25 09:35:09 | 0:30:00 | 0:12:31 | 0:17:29 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:31:54.497448+00:00 smithi198 bash[10630]: debug 2020-05-25T09:31:54.492+0000 7f0bb7cc6700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080065 | 2020-05-21 23:22:35 | 2020-05-25 09:05:09 | 2020-05-25 09:21:08 | 0:15:59 | 0:07:58 | 0:08:01 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 5080066 | 2020-05-21 23:22:35 | 2020-05-25 09:05:09 | 2020-05-25 09:23:08 | 0:17:59 | 0:10:45 | 0:07:14 | smithi | master | centos | 8.1 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/mon_recovery.yaml} | 2 | |
pass | 5080067 | 2020-05-21 23:22:36 | 2020-05-25 09:05:09 | 2020-05-25 09:33:09 | 0:28:00 | 0:14:33 | 0:13:27 | smithi | master | rhel | 8.1 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zlib.yaml supported-random-distro$/{rhel_8.yaml} tasks/insights.yaml} | 2 | |
fail | 5080068 | 2020-05-21 23:22:37 | 2020-05-25 09:05:09 | 2020-05-25 09:59:10 | 0:54:01 | 0:48:38 | 0:05:23 | smithi | master | centos | 8.1 | rados/standalone/{supported-random-distro$/{centos_8.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh' |
pass | 5080069 | 2020-05-21 23:22:38 | 2020-05-25 09:05:09 | 2020-05-25 09:49:10 | 0:44:01 | 0:13:54 | 0:30:07 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
fail | 5080070 | 2020-05-21 23:22:39 | 2020-05-25 09:05:09 | 2020-05-25 11:21:12 | 2:16:03 | 2:04:56 | 0:11:07 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
saw valgrind issues |
fail | 5080071 | 2020-05-21 23:22:40 | 2020-05-25 09:05:10 | 2020-05-25 09:39:11 | 0:34:01 | 0:13:02 | 0:20:59 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:31:14.231161+00:00 smithi162 bash[10489]: debug 2020-05-25T09:31:14.224+0000 7fcdf2ba2700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi162 ... ' in syslog |
pass | 5080072 | 2020-05-21 23:22:41 | 2020-05-25 09:05:14 | 2020-05-25 09:37:14 | 0:32:00 | 0:25:09 | 0:06:51 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 5080073 | 2020-05-21 23:22:42 | 2020-05-25 09:05:15 | 2020-05-25 09:37:14 | 0:31:59 | 0:24:21 | 0:07:38 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
pass | 5080074 | 2020-05-21 23:22:43 | 2020-05-25 09:07:16 | 2020-05-25 09:35:15 | 0:27:59 | 0:13:43 | 0:14:16 | smithi | master | rhel | 8.1 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 5080075 | 2020-05-21 23:22:44 | 2020-05-25 09:07:16 | 2020-05-25 09:33:15 | 0:25:59 | 0:18:18 | 0:07:41 | smithi | master | rhel | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_python.yaml} | 2 | |
fail | 5080076 | 2020-05-21 23:22:45 | 2020-05-25 09:07:16 | 2020-05-25 09:31:16 | 0:24:00 | 0:12:50 | 0:11:10 | smithi | master | centos | 8.1 | rados/cephadm/workunits/{distro/centos_latest.yaml task/test_cephadm.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=617298c201f8bbb9c16ff0a43c13d6dd93f90f82 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |
pass | 5080077 | 2020-05-21 23:22:46 | 2020-05-25 09:07:16 | 2020-05-25 09:25:15 | 0:17:59 | 0:07:55 | 0:10:04 | smithi | master | centos | 8.1 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5080078 | 2020-05-21 23:22:47 | 2020-05-25 09:07:16 | 2020-05-25 09:33:16 | 0:26:00 | 0:11:40 | 0:14:20 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_rand_read.yaml} | 1 | |
pass | 5080079 | 2020-05-21 23:22:47 | 2020-05-25 09:08:13 | 2020-05-25 10:04:13 | 0:56:00 | 0:34:53 | 0:21:07 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 5080080 | 2020-05-21 23:22:48 | 2020-05-25 09:09:08 | 2020-05-25 09:47:08 | 0:38:00 | 0:30:50 | 0:07:10 | smithi | master | rhel | 8.1 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
pass | 5080081 | 2020-05-21 23:22:49 | 2020-05-25 09:11:19 | 2020-05-25 09:55:19 | 0:44:00 | 0:31:23 | 0:12:37 | smithi | master | rhel | 8.1 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp-snappy.yaml rados.yaml supported-random-distro$/{rhel_8.yaml} thrashers/sync.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 5080082 | 2020-05-21 23:22:50 | 2020-05-25 09:11:19 | 2020-05-25 09:55:19 | 0:44:00 | 0:32:58 | 0:11:02 | smithi | master | centos | 8.1 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp-lz4.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
pass | 5080083 | 2020-05-21 23:22:51 | 2020-05-25 09:11:19 | 2020-05-25 10:27:20 | 1:16:01 | 1:00:28 | 0:15:33 | smithi | master | ubuntu | 18.04 | rados/dashboard/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
fail | 5080084 | 2020-05-21 23:22:52 | 2020-05-25 09:12:49 | 2020-05-25 11:38:52 | 2:26:03 | 1:59:24 | 0:26:39 | smithi | master | rhel | 8.1 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\'' |
fail | 5080085 | 2020-05-21 23:22:53 | 2020-05-25 09:13:05 | 2020-05-25 09:47:05 | 0:34:00 | 0:24:08 | 0:09:52 | smithi | master | ubuntu | 18.04 | rados/cephadm/with-work/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml mode/packaged.yaml msgr/async.yaml start.yaml tasks/rados_python.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:35:34.668558+00:00 smithi106 bash[18100]: debug 2020-05-25T09:35:34.666+0000 7fc018323700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080086 | 2020-05-21 23:22:54 | 2020-05-25 09:15:05 | 2020-05-25 09:35:04 | 0:19:59 | 0:12:31 | 0:07:28 | smithi | master | rhel | 8.1 | rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{rhel_8.yaml}} | 1 | |
fail | 5080087 | 2020-05-21 23:22:55 | 2020-05-25 09:15:05 | 2020-05-25 09:39:04 | 0:23:59 | 0:12:53 | 0:11:06 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke/{distro/ubuntu_18.04_podman.yaml fixed-2.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:35:32.089165+00:00 smithi121 bash[14971]: debug 2020-05-25T09:35:32.082+0000 7fa4364ba700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080088 | 2020-05-21 23:22:55 | 2020-05-25 09:15:10 | 2020-05-25 09:51:10 | 0:36:00 | 0:25:37 | 0:10:23 | smithi | master | centos | 8.1 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/on.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp-zstd.yaml rados.yaml supported-random-distro$/{centos_8.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 5080089 | 2020-05-21 23:22:56 | 2020-05-25 09:17:19 | 2020-05-25 09:51:18 | 0:33:59 | 0:25:03 | 0:08:56 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
fail | 5080090 | 2020-05-21 23:22:57 | 2020-05-25 09:17:19 | 2020-05-25 09:39:18 | 0:21:59 | 0:12:22 | 0:09:37 | smithi | master | ubuntu | 18.04 | rados/cephadm/smoke-roleless/{distro/ubuntu_18.04_podman.yaml start.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:32:36.076159+00:00 smithi100 bash[13535]: debug 2020-05-25T09:32:36.070+0000 7f325f13c700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.smithi100 ... ' in syslog |
pass | 5080091 | 2020-05-21 23:22:58 | 2020-05-25 09:17:19 | 2020-05-25 09:37:18 | 0:19:59 | 0:11:44 | 0:08:15 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml ubuntu_latest.yaml workloads/radosbench_4M_seq_read.yaml} | 1 | |
pass | 5080092 | 2020-05-21 23:22:59 | 2020-05-25 09:17:24 | 2020-05-25 09:33:24 | 0:16:00 | 0:09:06 | 0:06:54 | smithi | master | centos | 8.1 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{centos_8.yaml}} | 1 | |
fail | 5080093 | 2020-05-21 23:23:00 | 2020-05-25 09:19:08 | 2020-05-25 09:59:08 | 0:40:00 | 0:33:03 | 0:06:57 | smithi | master | centos | 8.0 | rados/cephadm/with-work/{distro/centos_8.0.yaml fixed-2.yaml mode/root.yaml msgr/async.yaml start.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
'/home/ubuntu/cephtest/archive/syslog/misc.log:2020-05-25T09:39:16.982702+00:00 smithi019 bash[25457]: debug 2020-05-25T09:39:16.981+0000 7f529cf6e700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ... ' in syslog |
pass | 5080094 | 2020-05-21 23:23:01 | 2020-05-25 09:19:08 | 2020-05-25 09:39:08 | 0:20:00 | 0:10:26 | 0:09:34 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 5080095 | 2020-05-21 23:23:02 | 2020-05-25 09:19:10 | 2020-05-25 10:03:11 | 0:44:01 | 0:24:42 | 0:19:19 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_7.6.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 3 | |
Failure Reason:
reached maximum tries (180) after waiting for 180 seconds |
pass | 5080096 | 2020-05-21 23:23:02 | 2020-05-25 09:19:10 | 2020-05-25 09:39:10 | 0:20:00 | 0:07:27 | 0:12:33 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 5080097 | 2020-05-21 23:23:03 | 2020-05-25 09:19:15 | 2020-05-25 10:05:15 | 0:46:00 | 0:32:54 | 0:13:06 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-hybrid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench-high-concurrency.yaml} | 2 | |
fail | 5080098 | 2020-05-21 23:23:04 | 2020-05-25 09:19:19 | 2020-05-25 09:59:19 | 0:40:00 | 0:27:54 | 0:12:06 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp-zstd.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
"2020-05-25T09:48:20.031621+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi066:z (7176), after 302.129 seconds" in cluster log |
pass | 5080099 | 2020-05-21 23:23:05 | 2020-05-25 09:21:19 | 2020-05-25 09:37:19 | 0:16:00 | 0:06:11 | 0:09:49 | smithi | master | ubuntu | 18.04 | rados/cephadm/workunits/{distro/ubuntu_18.04_podman.yaml task/test_cephadm_repos.yaml} | 1 | |
pass | 5080100 | 2020-05-21 23:23:06 | 2020-05-25 09:21:19 | 2020-05-25 09:45:19 | 0:24:00 | 0:17:35 | 0:06:25 | smithi | master | centos | 8.1 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_8.yaml} tasks/rados_stress_watch.yaml} | 2 | |
pass | 5080101 | 2020-05-21 23:23:07 | 2020-05-25 09:21:19 | 2020-05-25 10:01:20 | 0:40:01 | 0:27:45 | 0:12:16 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 5080102 | 2020-05-21 23:23:08 | 2020-05-25 09:21:19 | 2020-05-25 09:41:19 | 0:20:00 | 0:12:32 | 0:07:28 | smithi | master | centos | 8.1 | rados/verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |
pass | 5080103 | 2020-05-21 23:23:09 | 2020-05-25 09:23:18 | 2020-05-25 10:01:18 | 0:38:00 | 0:27:32 | 0:10:28 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp-zlib.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |