Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 5689712 2020-12-07 14:23:38 2020-12-07 14:24:59 2020-12-07 17:15:01 2:50:02 2:44:19 0:05:43 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/bluestore-bitmap ubuntu_latest} 4
dead 5689713 2020-12-07 14:23:39 2020-12-07 14:24:59 2020-12-07 14:38:58 0:13:59 0:02:43 0:11:16 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-octopus 7-msgr2 8-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap thrashosds-health ubuntu_latest} 5
Failure Reason:

{'smithi136.front.sepia.ceph.com': {'changed': False, 'msg': 'Data could not be sent to remote host "smithi136.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host smithi136.front.sepia.ceph.com port 22: No route to host\r\n', 'unreachable': True}, 'smithi076.front.sepia.ceph.com': {'changed': False, 'msg': 'Data could not be sent to remote host "smithi076.front.sepia.ceph.com". Make sure this host can be reached over ssh: Warning: Permanently added \'smithi076.front.sepia.ceph.com,172.21.15.76\' (ECDSA) to the list of known hosts.\r\nubuntu@smithi076.front.sepia.ceph.com: Permission denied (publickey,password,keyboard-interactive).\r\n', 'unreachable': True}}

pass 5689714 2020-12-07 14:23:39 2020-12-07 14:25:01 2020-12-07 16:53:03 2:28:02 2:18:42 0:09:20 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
pass 5689715 2020-12-07 14:23:40 2020-12-07 14:26:56 2020-12-07 17:12:59 2:46:03 2:38:44 0:07:19 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_latest} 4
pass 5689716 2020-12-07 14:23:41 2020-12-07 14:26:57 2020-12-07 17:18:59 2:52:02 2:43:19 0:08:43 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/bluestore-bitmap ubuntu_latest} 4
fail 5689717 2020-12-07 14:23:42 2020-12-07 14:27:17 2020-12-07 18:01:21 3:34:04 3:26:15 0:07:49 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-octopus 7-msgr2 8-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs thrashosds-health ubuntu_latest} 5
Failure Reason:

"2020-12-07T14:44:16.138203+0000 mon.c (mon.0) 281 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum c,a" in cluster log

fail 5689718 2020-12-07 14:23:43 2020-12-07 14:27:43 2020-12-07 14:35:42 0:07:59 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
Failure Reason:

Stale jobs detected, aborting.

pass 5689719 2020-12-07 14:23:43 2020-12-07 14:28:57 2020-12-07 17:21:00 2:52:03 2:38:50 0:13:13 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_latest} 4
dead 5689720 2020-12-07 14:23:44 2020-12-07 14:28:58 2020-12-07 14:48:57 0:19:59 0:03:12 0:16:47 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/bluestore-bitmap ubuntu_latest} 2
Failure Reason:

SSH connection to smithi136 was lost: 'sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true'

dead 5689721 2020-12-07 14:23:45 2020-12-07 14:28:57 2020-12-07 14:46:57 0:18:00 0:01:58 0:16:02 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-octopus 7-msgr2 8-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-bitmap thrashosds-health ubuntu_latest} 5
Failure Reason:

Failure object was: {'smithi076.front.sepia.ceph.com': {'changed': True, 'end': '2020-12-07 14:44:21.569448', 'stdout': '', 'cmd': ['sudo', 'apt-get', 'clean'], 'delta': '0:00:00.034461', 'stderr': 'E: Could not get lock /var/cache/apt/archives/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock directory /var/cache/apt/archives/', 'rc': 100, 'invocation': {'module_args': {'creates': None, 'executable': None, '_uses_shell': False, 'strip_empty_ends': True, '_raw_params': 'sudo apt-get clean', 'removes': None, 'argv': None, 'warn': True, 'chdir': None, 'stdin_add_newline': True, 'stdin': None}}, 'start': '2020-12-07 14:44:21.534987', 'warnings': ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"], 'msg': 'non-zero return code', 'stdout_lines': [], 'stderr_lines': ['E: Could not get lock /var/cache/apt/archives/lock - open (11: Resource temporarily unavailable)', 'E: Unable to lock directory /var/cache/apt/archives/'], '_ansible_no_log': False}}Traceback (most recent call last): File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined 
raise RepresenterError("cannot represent an object", data)yaml.representer.RepresenterError: ('cannot represent an object', 'sudo')Failure object was: {'smithi136.front.sepia.ceph.com': {'cache_updated': False, 'stdout': '', 'stderr': 'W: --force-yes is deprecated, use one of the options starting with --allow instead.\nE: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'rc': 100, 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': True, 'force_apt_get': False, 'policy_rc_d': None, 'package': ['mpich', 'qemu-system-x86', 'python-virtualenv', 'python-configobj', 'python-gevent', 'python-numpy', 'python-matplotlib', 'python-nose', 'btrfs-tools', 'lttng-tools', 'libtool-bin', 'docker.io', 'python3-nose', 'libfcgi0ldbl', 'python-dev', 'libev-dev', 'perl', 'libwww-perl', 'lsb-release', 'build-essential', 'sysstat', 'gdb', 'libedit2', 'cryptsetup-bin', 'xfsprogs', 'gdisk', 'parted', 'libuuid1', 'libatomic-ops-dev', 'git-core', 'attr', 'dbench', 'bonnie++', 'valgrind', 'ant', 'libtool', 'automake', 'gettext', 'uuid-dev', 'libacl1-dev', 'bc', 'xfsdump', 'xfslibs-dev', 'libattr1-dev', 'quota', 'libcap2-bin', 'libncurses5-dev', 'lvm2', 'vim', 'pdsh', 'collectl', 'blktrace', 'genisoimage', 'libjson-xs-perl', 'xml-twig-tools', 'default-jdk', 'junit4', 'tgt', 'open-iscsi', 'cifs-utils', 'ipcalc', 'nfs-common', 'nfs-kernel-server', 'software-properties-common'], 'autoclean': False, 'install_recommends': None, 'name': ['mpich', 'qemu-system-x86', 'python-virtualenv', 'python-configobj', 'python-gevent', 'python-numpy', 'python-matplotlib', 'python-nose', 'btrfs-tools', 'lttng-tools', 'libtool-bin', 'docker.io', 'python3-nose', 'libfcgi0ldbl', 'python-dev', 'libev-dev', 'perl', 'libwww-perl', 'lsb-release', 'build-essential', 'sysstat', 'gdb', 'libedit2', 'cryptsetup-bin', 
'xfsprogs', 'gdisk', 'parted', 'libuuid1', 'libatomic-ops-dev', 'git-core', 'attr', 'dbench', 'bonnie++', 'valgrind', 'ant', 'libtool', 'automake', 'gettext', 'uuid-dev', 'libacl1-dev', 'bc', 'xfsdump', 'xfslibs-dev', 'libattr1-dev', 'quota', 'libcap2-bin', 'libncurses5-dev', 'lvm2', 'vim', 'pdsh', 'collectl', 'blktrace', 'genisoimage', 'libjson-xs-perl', 'xml-twig-tools', 'default-jdk', 'junit4', 'tgt', 'open-iscsi', 'cifs-utils', 'ipcalc', 'nfs-common', 'nfs-kernel-server', 'software-properties-common'], 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': None, 'default_release': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0}}, 'cache_update_time': 1607352204, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" --force-yes install \'qemu-system-x86\' \'git-core\'\' failed: W: --force-yes is deprecated, use one of the options starting with --allow instead.\nE: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n', 'stdout_lines': [], 'stderr_lines': ['W: --force-yes is deprecated, use one of the options starting with --allow instead.', 'E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)', 'E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?'], '_ansible_no_log': False, 'changed': False}}Traceback (most recent call last): File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', 
data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)yaml.representer.RepresenterError: ('cannot represent an object', 'force-confdef,force-confold')
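The two `RepresenterError` tracebacks above come from teuthology's `failure_log.py` calling `yaml.safe_dump` on the Ansible failure dict. The values it chokes on (`'sudo'`, `'force-confdef,force-confold'`) print like plain strings, which suggests they are `str` subclasses (such as Ansible's `AnsibleUnsafeText`) that PyYAML's `SafeDumper` refuses to serialize. A minimal sketch of the failure mode, using a hypothetical `UnsafeText` class as a stand-in:

```python
import yaml


class UnsafeText(str):
    """Stand-in for a str subclass like Ansible's AnsibleUnsafeText."""


failure = {"cmd": [UnsafeText("sudo"), UnsafeText("apt-get"), UnsafeText("clean")]}

try:
    yaml.safe_dump(failure)
    caught = None
except yaml.representer.RepresenterError as exc:
    # SafeDumper registers a representer for str itself, not for str
    # subclasses, so lookup falls through to represent_undefined and raises.
    caught = exc.args[0]

print(caught)  # "cannot represent an object"
```

One common workaround is to coerce such values back to plain built-ins (e.g. `str(value)`) before handing the structure to `safe_dump`; the second traceback shows the same serialization bug masking the real failure (the apt/dpkg lock contention) in the log.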

fail 5689722 2020-12-07 14:23:46 2020-12-07 14:28:57 2020-12-07 16:42:59 2:14:02 2:06:16 0:07:46 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-bitmap 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
Failure Reason:

"2020-12-07T14:46:14.255632+0000 mon.c (mon.0) 268 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum c,a" in cluster log

pass 5689723 2020-12-07 14:23:47 2020-12-07 14:28:58 2020-12-07 17:27:01 2:58:03 2:37:42 0:20:21 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_latest} 4
pass 5689724 2020-12-07 14:23:48 2020-12-07 14:30:51 2020-12-07 17:28:54 2:58:03 2:42:52 0:15:11 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/short_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-mon-osd-mds 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/bluestore-bitmap ubuntu_latest} 4
fail 5689725 2020-12-07 14:23:48 2020-12-07 14:30:51 2020-12-07 21:20:59 6:50:08 3:14:26 3:35:42 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{radosbench rbd-cls rbd-import-export rbd_api readwrite rgw_ragweed_prepare snaps-few-objects} 5-finish-upgrade 6-octopus 7-msgr2 8-final-workload/{rbd-python snaps-many-objects} objectstore/filestore-xfs thrashosds-health ubuntu_latest} 5
Failure Reason:

"2020-12-07T18:17:28.858218+0000 mon.c (mon.0) 274 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum c,a" in cluster log

pass 5689726 2020-12-07 14:23:49 2020-12-07 14:30:51 2020-12-07 20:04:57 5:34:06 1:56:37 3:37:29 smithi master ubuntu 18.04 upgrade:nautilus-x/stress-split-erasure-code/{0-cluster/{openstack start} 1-nautilus-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/filestore-xfs 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-octopus 7-final-workload thrashosds-health ubuntu_latest} 5
pass 5689727 2020-12-07 14:23:50 2020-12-07 14:30:51 2020-12-07 18:04:55 3:34:04 2:39:25 0:54:39 smithi master ubuntu 18.04 upgrade:nautilus-x/parallel/{0-cluster/{openstack start} 1-ceph-install/nautilus 1.1-pg-log-overrides/normal_pg_log 2-workload/{blogbench ec-rados-default rados_api rados_loadgenbig rgw_ragweed_prepare test_rbd_api test_rbd_python} 3-upgrade-sequence/upgrade-all 4-octopus 5-final-workload/{blogbench rados-snaps-few-objects rados_loadgenmix rados_mon_thrash rbd_cls rbd_import_export rgw rgw_ragweed_check} objectstore/filestore-xfs ubuntu_latest} 4