Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7256043 2023-04-27 14:39:46 2023-04-27 14:42:12 2023-04-27 16:58:19 2:16:07 1:46:06 0:30:01 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7256044 2023-04-27 14:39:47 2023-04-27 14:43:02 2023-04-27 15:02:10 0:19:08 0:06:22 0:12:46 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi039 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7256045 2023-04-27 14:39:47 2023-04-27 14:43:23 2023-04-27 16:26:50 1:43:27 1:13:39 0:29:48 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-04-27T16:22:37.808361+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7256046 2023-04-27 14:39:48 2023-04-27 14:43:23 2023-04-27 17:10:48 2:27:25 1:58:40 0:28:45 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b35a461e1c1b935b2b3fc7c43d68a58c1a41547 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7256047 2023-04-27 14:39:49 2023-04-27 14:43:23 2023-04-27 16:11:28 1:28:05 0:58:37 0:29:28 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7256048 2023-04-27 14:39:50 2023-04-27 14:44:24 2023-04-27 16:45:41 2:01:17 1:29:54 0:31:23 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7256049 2023-04-27 14:39:51 2023-04-27 14:46:34 2023-04-27 16:12:02 1:25:28 0:56:08 0:29:20 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7256050 2023-04-27 14:39:51 2023-04-27 14:46:35 2023-04-27 16:51:07 2:04:32 1:32:43 0:31:49 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 7256051 2023-04-27 14:39:52 2023-04-27 14:47:25 2023-04-27 15:07:51 0:20:26 0:07:57 0:12:29 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi118 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7256052 2023-04-27 14:39:53 2023-04-27 14:47:46 2023-04-27 15:08:03 0:20:17 0:09:12 0:11:05 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5b35a461e1c1b935b2b3fc7c43d68a58c1a41547

dead 7256053 2023-04-27 14:39:54 2023-04-27 14:47:56 2023-04-28 02:58:07 12:10:11 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7256054 2023-04-27 14:39:54 2023-04-27 14:49:36 2023-04-27 15:17:32 0:27:56 0:18:26 0:09:30 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi090 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5b35a461e1c1b935b2b3fc7c43d68a58c1a41547 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ed8abaf0-e50c-11ed-9b00-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
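
For readability, this is the script embedded in the failed command above with the nested shell quoting unwrapped (a best-effort reconstruction of the same text, provided only as a reading aid; the log line above remains authoritative, and the comments are descriptive additions):

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # locate the host and device backing osd.1
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1, zap its device, re-add it, and wait for it to come up
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done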

fail 7256055 2023-04-27 14:39:55 2023-04-27 14:49:37 2023-04-27 16:53:24 2:03:47 1:32:01 0:31:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi082.front.sepia.ceph.com: ['type=AVC msg=audit(1682614259.963:18681): avc: denied { ioctl } for pid=154450 comm="iptables" path="/var/lib/containers/storage/overlay/fa555d0a70aebe52b35dbd0ae7173cac8dd60bb4b18c7457de67b3610dd44b02/merged" dev="overlay" ino=3412108 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7256056 2023-04-27 14:39:56 2023-04-27 14:52:07 2023-04-27 16:30:36 1:38:29 1:00:00 0:38:29 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 7256057 2023-04-27 14:39:57 2023-04-27 14:53:28 2023-04-28 03:04:59 12:11:31 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 7256058 2023-04-27 14:39:58 2023-04-27 14:55:29 2023-04-27 17:30:05 2:34:36 1:57:29 0:37:07 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b35a461e1c1b935b2b3fc7c43d68a58c1a41547 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7256059 2023-04-27 14:39:59 2023-04-27 14:56:49 2023-04-27 16:33:09 1:36:20 0:58:46 0:37:34 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7256060 2023-04-27 14:39:59 2023-04-27 14:57:50 2023-04-27 15:13:55 0:16:05 0:06:24 0:09:41 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi130 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7256061 2023-04-27 14:40:00 2023-04-27 14:57:50 2023-04-27 17:04:40 2:06:50 1:32:08 0:34:42 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7256062 2023-04-27 14:40:01 2023-04-27 14:57:50 2023-04-27 17:12:30 2:14:40 1:34:16 0:40:24 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7256063 2023-04-27 14:40:02 2023-04-27 15:00:31 2023-04-27 16:34:43 1:34:12 0:56:56 0:37:16 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7256064 2023-04-27 14:40:03 2023-04-27 15:01:32 2023-04-27 15:26:44 0:25:12 0:08:23 0:16:49 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5b35a461e1c1b935b2b3fc7c43d68a58c1a41547

pass 7256065 2023-04-27 14:40:03 2023-04-27 15:02:12 2023-04-27 17:23:59 2:21:47 1:45:58 0:35:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7256066 2023-04-27 14:40:04 2023-04-27 15:08:13 2023-04-27 17:21:04 2:12:51 1:37:45 0:35:06 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-04-27T17:16:48.775822+0000 mon.a (mon.0) 466 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7256067 2023-04-27 14:40:05 2023-04-27 15:14:05 2023-04-27 17:19:19 2:05:14 1:36:22 0:28:52 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
dead 7256068 2023-04-27 14:40:06 2023-04-27 15:14:05 2023-04-27 16:17:25 1:03:20 0:34:11 0:29:09 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

{'smithi143.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'invocation': {'module_args': {'allow_downgrade': False, 'allowerasing': False, 'autoremove': False, 'bugfix': False, 'conf_file': None, 'disable_excludes': None, 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 'download_dir': None, 'download_only': False, 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'install_repoquery': True, 'install_weak_deps': True, 'installroot': '/', 'list': None, 'lock_timeout': 30, 'name': ['krb5-workstation'], 'releasever': None, 'security': False, 'skip_broken': False, 'state': 'present', 'update_cache': False, 'update_only': False, 'validate_certs': True}}, 'msg': "Failed to download metadata for repo 'CentOS-PowerTools': Yum repo downloading error: Downloading error(s): repodata/e830a7a4e881ef24680d161802ae07874dd447031dd12d47f2d3d4a911245522-primary.xml.gz - Cannot download, all mirrors were already tried without success", 'rc': 1, 'results': []}}

fail 7256069 2023-04-27 14:40:07 2023-04-27 15:14:25 2023-04-27 15:34:12 0:19:47 0:07:18 0:12:29 smithi main ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/1.6.2} 3
Failure Reason:

Command failed on smithi031 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'