User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
lflores | 2022-10-11 20:28:04 | 2022-10-11 20:31:12 | 2022-10-12 08:40:52 | 12:09:40 | rados | wip-prim-balance-score | smithi | 55ce98e | 7 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7063168 | 2022-10-11 20:29:25 | 2022-10-11 20:31:12 | 2022-10-11 20:44:22 | 0:13:10 | 0:04:35 | 0:08:35 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 |
Failure Reason: Command failed on smithi079 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'
fail | 7063169 | 2022-10-11 20:29:26 | 2022-10-11 20:31:12 | 2022-10-11 21:21:29 | 0:50:17 | 0:38:44 | 0:11:33 | smithi | main | ubuntu | 20.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: 'pool'
dead | 7063170 | 2022-10-11 20:29:27 | 2022-10-11 20:32:03 | 2022-10-12 08:40:52 | 12:08:49 | | | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
Failure Reason: hit max job timeout
fail | 7063171 | 2022-10-11 20:29:28 | 2022-10-11 20:32:23 | 2022-10-11 21:08:36 | 0:36:13 | 0:24:39 | 0:11:34 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} | 3 |
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
dead | 7063172 | 2022-10-11 20:29:29 | 2022-10-11 20:33:03 | 2022-10-11 20:44:39 | 0:11:36 | 0:03:48 | 0:07:48 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: {'smithi179.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Loading repository 'rhel-8-for-x86_64-baseos-rpms' has failed", 'rc': 1, 'results': []}}
fail | 7063173 | 2022-10-11 20:29:31 | 2022-10-11 20:33:44 | 2022-10-11 20:49:07 | 0:15:23 | 0:04:36 | 0:10:47 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 |
Failure Reason: Command failed on smithi062 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'
fail | 7063174 | 2022-10-11 20:29:32 | 2022-10-11 20:33:44 | 2022-10-11 21:13:51 | 0:40:07 | 0:29:14 | 0:10:53 | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: 'pool'
fail | 7063175 | 2022-10-11 20:29:33 | 2022-10-11 20:36:15 | 2022-10-11 21:16:20 | 0:40:05 | 0:31:16 | 0:08:49 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 |
Failure Reason: "1665522533.2504487 mgr.x (mgr.4108) 22 : cluster [ERR] Unhandled exception from module 'pg_autoscaler' while running on mgr.x: ('pool',)" in cluster log
dead | 7063176 | 2022-10-11 20:29:34 | 2022-10-11 20:38:15 | 2022-10-11 21:04:57 | 0:26:42 | 0:15:32 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} | 3 |
Failure Reason: {'smithi017.front.sepia.ceph.com': {'_ansible_no_log': False, 'attempts': 24, 'changed': False, 'invocation': {'module_args': {'allow_unauthenticated': False, 'autoclean': False, 'autoremove': False, 'cache_valid_time': 0, 'deb': None, 'default_release': None, 'dpkg_options': 'force-confdef,force-confold', 'force': False, 'force_apt_get': False, 'install_recommends': None, 'only_upgrade': False, 'package': None, 'policy_rc_d': None, 'purge': False, 'state': 'present', 'update_cache': True, 'update_cache_retries': 5, 'update_cache_retry_max_delay': 12, 'upgrade': None}}, 'msg': 'Failed to update apt cache: unknown reason'}}
fail | 7063177 | 2022-10-11 20:29:35 | 2022-10-11 20:38:36 | 2022-10-11 20:56:15 | 0:17:39 | 0:07:51 | 0:09:48 | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: "1665521575.2801657 mgr.x (mgr.4101) 88 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: ('pool',)" in cluster log
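Note on the recurring `'pool'` failures: the bare `'pool'` reason on jobs 7063169 and 7063174, and the `('pool',)` in the `pg_autoscaler` and `balancer` mgr tracebacks, are the two standard renderings of an unhandled Python `KeyError` for a missing `'pool'` key (`str()` of the exception vs. its `args` tuple). A minimal sketch of why the log looks this way — the dict here is hypothetical, not the actual mgr module code:

```python
# An unhandled KeyError stringifies as the repr of the missing key,
# which is why the failure reason is just 'pool' (quotes included).
stats = {"pg_num": 32}  # hypothetical stats dict with no 'pool' key

try:
    stats["pool"]
except KeyError as e:
    assert str(e) == "'pool'"     # the bare failure reason above
    assert e.args == ("pool",)    # the "('pool',)" form in the mgr traceback
```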