Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7555048 2024-02-09 19:43:49 2024-02-10 05:02:05 2024-02-10 05:20:59 0:18:54 0:07:11 0:11:43 smithi main ubuntu 22.04 upgrade:quincy-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=quincy
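
This failure (which recurs below for job 7555054) means shaman had no ready default-flavor build matching the requested ref and distro. A quick triage step is to decode the query string of the reported URL to see exactly what teuthology asked for; this is a sketch using only the URL from the failure reason above:

```python
from urllib.parse import urlsplit, parse_qs

# The URL reported in the failure reason above.
url = ("https://shaman.ceph.com/api/search/?status=ready&project=ceph"
       "&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=quincy")

# parse_qs decodes the percent-encoding, so the distro triple
# (distro/version/arch) becomes readable.
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params["distros"])  # ubuntu/22.04/x86_64
print(params["ref"])      # quincy
print(params["flavor"])   # default
```

If fetching that URL by hand returns an empty result set, the build simply was not available for ubuntu 22.04 x86_64 at run time, which matches the short (~0:07) duration of both affected jobs.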

fail 7555049 2024-02-09 19:43:50 2024-02-10 05:03:16 2024-02-10 07:36:37 2:33:21 2:23:56 0:09:25 smithi main centos 9.stream upgrade:quincy-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason:

"1707542636.2019799 mon.a (mon.0) 523 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7555051 2024-02-09 19:43:51 2024-02-10 05:03:36 2024-02-10 05:22:21 0:18:45 0:08:54 0:09:51 smithi main ubuntu 20.04 upgrade:quincy-x/filestore-remove-check/{0-cluster/{openstack start} 1-ceph-install/quincy 2-upgrade objectstore/filestore-xfs ubuntu_20.04} 1
fail 7555051 2024-02-09 19:43:52 2024-02-10 05:04:17 2024-02-10 06:32:31 1:28:14 1:16:14 0:12:00 smithi main centos 9.stream upgrade:quincy-x/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7555052 2024-02-09 19:43:53 2024-02-10 05:05:07 2024-02-10 07:54:29 2:49:22 2:38:23 0:10:59 smithi main centos 9.stream upgrade:quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason:

"1707543000.000125 mon.a (mon.0) 740 : cluster [ERR] Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds" in cluster log

fail 7555053 2024-02-09 19:43:54 2024-02-10 05:05:08 2024-02-10 07:53:46 2:48:38 2:33:56 0:14:42 smithi main centos 9.stream upgrade:quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason:

"1707543084.2042348 mon.a (mon.0) 509 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7555054 2024-02-09 19:43:55 2024-02-10 05:10:29 2024-02-10 05:28:16 0:17:47 0:07:13 0:10:34 smithi main ubuntu 22.04 upgrade:quincy-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=quincy

fail 7555055 2024-02-09 19:43:56 2024-02-10 05:10:39 2024-02-10 07:42:05 2:31:26 2:22:23 0:09:03 smithi main centos 9.stream upgrade:quincy-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason:

"1707543010.999864 mon.a (mon.0) 364 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7555056 2024-02-09 19:43:57 2024-02-10 05:10:39 2024-02-10 06:39:23 1:28:44 1:16:14 0:12:30 smithi main centos 9.stream upgrade:quincy-x/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'

fail 7555057 2024-02-09 19:43:57 2024-02-10 05:12:20 2024-02-10 08:17:37 3:05:17 2:55:04 0:10:13 smithi main centos 9.stream upgrade:quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason:

"1707543199.5088136 mon.a (mon.0) 563 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log