Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6162140 2021-06-09 09:21:16 2021-06-09 10:56:26 2021-06-09 11:23:44 0:27:18 0:17:36 0:09:42 smithi master centos 8.2 rados/cephadm/upgrade/{1-start-distro/1-start-centos_8 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2} 2
Failure Reason:

reached maximum tries (180) after waiting for 180 seconds

fail 6162141 2021-06-09 09:21:17 2021-06-09 10:56:26 2021-06-09 11:10:03 0:13:37 0:03:36 0:10:01 smithi master ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_stable task/test_orch_cli} 1
Failure Reason:

Command failed on smithi171 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"

fail 6162142 2021-06-09 09:21:18 2021-06-09 10:56:36 2021-06-09 11:18:05 0:21:29 0:10:24 0:11:05 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/upmap msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

"2021-06-09T11:12:23.980890+0000 mgr.y (mgr.4110) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6162143 2021-06-09 09:21:19 2021-06-09 10:56:46 2021-06-09 11:10:03 0:13:17 0:03:46 0:09:31 smithi master ubuntu 20.04 rados/cephadm/with-work/{0-distro/rhel_8.3_kubic_stable distro/ubuntu_20.04 fixed-2 mode/root msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi027 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"

pass 6162144 2021-06-09 09:21:20 2021-06-09 10:56:47 2021-06-09 11:29:52 0:33:05 0:20:06 0:12:59 smithi master ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
dead 6162145 2021-06-09 09:21:21 2021-06-09 10:57:07 2021-06-09 11:14:00 0:16:53 0:04:00 0:12:53 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-start-upgrade 4-wait fixed-2} 2
Failure Reason:

{'Failure object was': {'smithi047.front.sepia.ceph.com': {'msg': 'Failed to update apt cache: ', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'force_apt_get': False, 'policy_rc_d': 'None', 'package': 'None', 'autoclean': False, 'install_recommends': 'None', 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': 'None', 'update_cache': True, 'default_release': 'None', 'only_upgrade': False, 'deb': 'None', 'cache_valid_time': 0}}, '_ansible_no_log': False, 'attempts': 24, 'changed': False}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', '_ansible_no_log')"}

fail 6162146 2021-06-09 09:21:22 2021-06-09 10:58:47 2021-06-09 11:33:16 0:34:29 0:23:43 0:10:46 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps-readproxy} 2
Failure Reason:

"2021-06-09T11:15:31.962631+0000 mgr.y (mgr.4107) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6162147 2021-06-09 09:21:23 2021-06-09 10:59:58 2021-06-09 11:31:24 0:31:26 0:21:45 0:09:41 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} 2
Failure Reason:

"2021-06-09T11:15:39.808316+0000 mgr.y (mgr.4112) 1 : cluster [ERR] Failed to load ceph-mgr modules: rook" in cluster log

fail 6162148 2021-06-09 09:21:24 2021-06-09 11:00:10 2021-06-09 11:02:09 0:01:59 0 smithi master ubuntu 20.04 rados/cephadm/thrash/0-distro/ubuntu_20.04_kubic_stable
Failure Reason:

list index out of range

pass 6162149 2021-06-09 09:21:25 2021-06-09 11:00:08 2021-06-09 11:21:41 0:21:33 0:11:35 0:09:58 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/failover} 2
fail 6162150 2021-06-09 09:21:26 2021-06-09 11:00:08 2021-06-09 11:35:11 0:35:03 0:24:37 0:10:26 smithi master ubuntu 20.04 rados/dashboard/{clusters/{2-node-mgr} debug/mgr objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} tasks/dashboard} 2
Failure Reason:

Test failure: test_host_devices (tasks.mgr.dashboard.test_host.HostControllerTest)

fail 6162151 2021-06-09 09:21:27 2021-06-09 11:00:09 2021-06-09 11:14:11 0:14:02 0:03:37 0:10:25 smithi master ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04_kubic_testing task/test_adoption} 1
Failure Reason:

Command failed on smithi049 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup'"
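
Note: jobs 6162141, 6162143, and 6162151 all fail on the same step, "sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup" returning status 1 on the ubuntu 20.04 kubic nodes. One plausible but unconfirmed explanation is that the file was never created because the preceding Kubic/podman repository setup did not complete, so there is nothing to back up. Below is a minimal diagnostic sketch, assuming the usual ubuntu-user SSH access to a sepia smithi node (hostname taken from job 6162151); it is not part of the suite, only a manual check.

    # Check whether the source file exists before attempting the backup copy.
    ssh ubuntu@smithi049.front.sepia.ceph.com '
        if sudo test -f /etc/containers/registries.conf; then
            sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup
        else
            echo "registries.conf is missing; the kubic podman setup likely did not complete"
        fi
    '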