Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7382156 2023-08-27 15:09:15 2023-08-27 15:30:30 2023-08-27 16:21:47 0:51:17 0:27:17 0:24:00 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 7382157 2023-08-27 15:09:15 2023-08-27 15:44:52 2023-08-27 16:14:48 0:29:56 0:19:18 0:10:38 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
Failure Reason:

"2023-08-27T16:09:23.079733+0000 mon.a (mon.0) 521 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7382158 2023-08-27 15:09:16 2023-08-27 15:44:52 2023-08-27 16:59:48 1:14:56 1:04:52 0:10:04 smithi main centos 8.stream rados/dashboard/{centos_8.stream_container_tools clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{filestore-xfs} tasks/dashboard} 2
Failure Reason:

"2023-08-27T16:29:57.973736+0000 mon.a (mon.0) 3580 : cluster [WRN] Health check failed: 2 client(s) laggy due to laggy OSDs (MDS_CLIENTS_LAGGY)" in cluster log

fail 7382159 2023-08-27 15:09:17 2023-08-27 15:45:03 2023-08-27 16:22:02 0:36:59 0:27:08 0:09:51 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 7382160 2023-08-27 15:09:17 2023-08-27 15:45:03 2023-08-27 16:19:47 0:34:44 0:21:33 0:13:11 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-08-27T16:17:08.795438+0000 mon.a (mon.0) 1603 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7382161 2023-08-27 15:09:18 2023-08-27 15:45:03 2023-08-27 17:34:57 1:49:54 1:40:00 0:09:54 smithi main ubuntu 20.04 rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
pass 7382162 2023-08-27 15:09:19 2023-08-27 15:45:04 2023-08-27 16:50:44 1:05:40 0:56:16 0:09:24 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
fail 7382163 2023-08-27 15:09:19 2023-08-27 15:45:14 2023-08-27 16:02:54 0:17:40 0:08:02 0:09:38 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=c84215f518d3405f149863c6ad4474e639fb7cf5

fail 7382164 2023-08-27 15:09:20 2023-08-27 15:45:14 2023-08-27 16:10:50 0:25:36 0:16:03 0:09:33 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi052 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c84215f518d3405f149863c6ad4474e639fb7cf5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b1afd164-44f2-11ee-817b-1b7a5f1ba315 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

fail 7382165 2023-08-27 15:09:21 2023-08-27 15:45:15 2023-08-27 16:22:35 0:37:20 0:27:16 0:10:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7382166 2023-08-27 15:09:21 2023-08-27 15:45:15 2023-08-27 18:58:24 3:13:09 3:03:11 0:09:58 smithi main ubuntu 18.04 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{rbd-cls rbd-import-export readwrite snaps-few-objects} 5-workload/{radosbench rbd_api} 6-finish-upgrade 7-pacific 8-workload/{rbd-python snaps-many-objects} bluestore-bitmap mon_election/connectivity thrashosds-health ubuntu_18.04} 4
fail 7382167 2023-08-27 15:09:22 2023-08-27 15:45:25 2023-08-27 16:21:37 0:36:12 0:25:54 0:10:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

fail 7382168 2023-08-27 15:09:23 2023-08-27 15:45:26 2023-08-27 15:54:45 0:09:19 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

'NoneType' object has no attribute '_fields'

fail 7382169 2023-08-27 15:09:23 2023-08-27 15:45:26 2023-08-27 16:15:57 0:30:31 0:20:10 0:10:21 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-08-27T16:12:59.568101+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7382170 2023-08-27 15:09:24 2023-08-27 15:45:36 2023-08-27 16:14:12 0:28:36 0:19:11 0:09:25 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-08-27T16:11:29.153542+0000 mon.a (mon.0) 472 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7382171 2023-08-27 15:09:25 2023-08-27 15:45:37 2023-08-27 16:21:36 0:35:59 0:26:00 0:09:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
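
Of the 16 jobs above, 3 passed and 13 failed, and the failures cluster into a few recurring signatures. A quick way to triage a run like this is to collapse each failure reason to a stable signature and count them; the sketch below does that for the rows above (reasons abbreviated from the table; the `signature` helper is a hypothetical heuristic, not part of teuthology):

```python
from collections import Counter
import re

# (job_id, failure_reason) pairs abbreviated from the failed jobs above
failures = [
    ("7382156", "Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)"),
    ("7382157", "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"),
    ("7382158", "Health check failed: 2 client(s) laggy due to laggy OSDs (MDS_CLIENTS_LAGGY)"),
    ("7382159", "Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)"),
    ("7382160", "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"),
    ("7382163", "Failed to fetch package version from shaman.ceph.com"),
    ("7382164", "Command failed on smithi052 with status 22"),
    ("7382165", "Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)"),
    ("7382167", "Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)"),
    ("7382168", "'NoneType' object has no attribute '_fields'"),
    ("7382169", "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"),
    ("7382170", "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"),
    ("7382171", "Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)"),
]

def signature(reason):
    """Collapse a failure reason to a stable grouping key (heuristic)."""
    # Prefer an all-caps health-check code like POOL_APP_NOT_ENABLED.
    m = re.search(r"\(([A-Z][A-Z_]+)\)", reason)
    if m:
        return m.group(1)
    # For unittest failures, group by the failing test name.
    if reason.startswith("Test failure:"):
        return reason.split()[2]
    # Otherwise fall back to a prefix of the message.
    return reason[:40]

counts = Counter(signature(r) for _, r in failures)
print(counts.most_common())
# top entries: ('test_cluster_info', 5), ('POOL_APP_NOT_ENABLED', 4)
```

This makes the dominant issues obvious at a glance: the `test_cluster_info` failure in `tasks.cephfs.test_nfs.TestNFS` (5 jobs) and the `POOL_APP_NOT_ENABLED` health warning (4 jobs) account for most of the run's failures, with the remaining four jobs failing for distinct one-off reasons.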