User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-02-06 00:23:51 | 2024-02-06 16:54:03 | 2024-02-06 18:40:39 | 1:46:36 | rados | wip-yuri10-testing-2024-02-02-1149-pacific | smithi | ce0a401 | 5 | 19 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7548066 | 2024-02-06 00:25:48 | 2024-02-06 16:54:03 | 2024-02-06 17:26:19 | 0:32:16 | 0:21:24 | 0:10:52 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-06T17:11:44.067215+0000 mon.a (mon.0) 160 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 7548067 | 2024-02-06 00:25:49 | 2024-02-06 16:54:44 | 2024-02-06 17:25:03 | 0:30:19 | 0:16:38 | 0:13:41 | smithi | main | centos | 8.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} | 2 | |
Failure Reason: "2024-02-06T17:20:55.559706+0000 mon.a (mon.0) 471 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7548068 | 2024-02-06 00:25:50 | 2024-02-06 16:57:45 | 2024-02-06 17:31:45 | 0:34:00 | 0:22:54 | 0:11:06 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ce0a401e7175623ae7f0c4552bd00c17eefaf943 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7548069 | 2024-02-06 00:25:51 | 2024-02-06 16:58:35 | 2024-02-06 17:27:57 | 0:29:22 | 0:17:40 | 0:11:42 | smithi | main | rhel | 8.6 | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} | 3 | |
Failure Reason: 'package_manager_version'
fail | 7548070 | 2024-02-06 00:25:52 | 2024-02-06 17:01:36 | 2024-02-06 17:33:45 | 0:32:09 | 0:21:54 | 0:10:15 | smithi | main | centos | 8.stream | rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a15007fc-c513-11ee-95b6-87774f69a715 -e sha1=ce0a401e7175623ae7f0c4552bd00c17eefaf943 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
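For readability, the post-upgrade assertion embedded in that command, unescaped (a transcription of the check quoted above, not new logic):

```sh
# jq -e exits non-zero unless the expression is truthy, i.e. unless
# `ceph versions` reports exactly one version key across all daemons.
ceph versions | jq -e '.overall | length == 1'
```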
pass | 7548071 | 2024-02-06 00:25:53 | 2024-02-06 17:02:36 | 2024-02-06 17:46:38 | 0:44:02 | 0:34:25 | 0:09:37 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7548072 | 2024-02-06 00:25:53 | 2024-02-06 17:03:07 | 2024-02-06 17:26:22 | 0:23:15 | 0:12:48 | 0:10:27 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7548073 | 2024-02-06 00:25:54 | 2024-02-06 17:04:17 | 2024-02-06 17:28:41 | 0:24:24 | 0:16:12 | 0:08:12 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-06T17:25:28.403796+0000 mon.a (mon.0) 680 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY)" in cluster log
fail | 7548074 | 2024-02-06 00:25:55 | 2024-02-06 17:05:18 | 2024-02-06 17:37:25 | 0:32:07 | 0:21:35 | 0:10:32 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-06T17:30:30.636360+0000 mon.a (mon.0) 689 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
fail | 7548075 | 2024-02-06 00:25:56 | 2024-02-06 17:05:48 | 2024-02-06 17:42:36 | 0:36:48 | 0:24:11 | 0:12:37 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason: "2024-02-06T17:39:32.139896+0000 mon.a (mon.0) 505 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7548076 | 2024-02-06 00:25:57 | 2024-02-06 17:06:49 | 2024-02-06 17:49:06 | 0:42:17 | 0:31:26 | 0:10:51 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: "2024-02-06T17:23:56.585295+0000 mon.a (mon.0) 123 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
pass | 7548077 | 2024-02-06 00:25:58 | 2024-02-06 17:06:59 | 2024-02-06 17:35:24 | 0:28:25 | 0:18:37 | 0:09:48 | smithi | main | centos | 8.stream | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} | 2 | |
fail | 7548078 | 2024-02-06 00:25:58 | 2024-02-06 17:08:10 | 2024-02-06 17:36:42 | 0:28:32 | 0:15:36 | 0:12:56 | smithi | main | ubuntu | 18.04 | rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi078 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ce0a401e7175623ae7f0c4552bd00c17eefaf943 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9ef08864-c514-11ee-95b6-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
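For readability, the shell sequence embedded in that command, unescaped (a transcription of the rm-zap-add reproducer quoted above; $HOST, $DEV, and $DEVID are resolved on the live cluster):

```sh
set -e
set -x
ceph orch ps
ceph orch device ls
# Resolve the device id, host, and device path backing osd.1.
DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
echo "host $HOST, dev $DEV, devid $DEVID"
# Remove osd.1, wait for the removal to drain, zap the backing
# device, re-add it, then wait for the OSD to come back up.
ceph orch osd rm 1
while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
ceph orch device zap $HOST $DEV --force
ceph orch daemon add osd $HOST:$DEV
while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done
```

The job's status 22 (EINVAL) indicates one of these orchestrator steps was rejected rather than timing out.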
fail | 7548079 | 2024-02-06 00:25:59 | 2024-02-06 17:11:10 | 2024-02-06 17:57:44 | 0:46:34 | 0:31:45 | 0:14:49 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: "2024-02-06T17:32:58.676645+0000 mon.smithi019 (mon.0) 68 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7548080 | 2024-02-06 00:26:00 | 2024-02-06 17:16:11 | 2024-02-06 17:49:18 | 0:33:07 | 0:21:33 | 0:11:34 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-06T17:34:39.306552+0000 mon.a (mon.0) 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 7548081 | 2024-02-06 00:26:01 | 2024-02-06 17:17:42 | 2024-02-06 17:52:08 | 0:34:26 | 0:23:54 | 0:10:32 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} | 2 | |
Failure Reason: "2024-02-06T17:47:39.556426+0000 mon.a (mon.0) 371 : cluster [WRN] Health check failed: 1 host is in maintenance mode (HOST_IN_MAINTENANCE)" in cluster log
fail | 7548082 | 2024-02-06 00:26:02 | 2024-02-06 17:19:02 | 2024-02-06 17:46:28 | 0:27:26 | 0:16:41 | 0:10:45 | smithi | main | rhel | 8.6 | rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{rhel_8} workloads/ceph_iscsi} | 3 | |
Failure Reason: 'package_manager_version'
fail | 7548083 | 2024-02-06 00:26:03 | 2024-02-06 17:23:13 | 2024-02-06 18:12:30 | 0:49:17 | 0:31:23 | 0:17:54 | smithi | main | ubuntu | 20.04 | rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} | 2 | |
Failure Reason: "2024-02-06T17:48:16.290775+0000 mon.a (mon.0) 178 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 7548084 | 2024-02-06 00:26:04 | 2024-02-06 17:29:54 | 2024-02-06 17:56:00 | 0:26:06 | 0:17:44 | 0:08:22 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-02-06T17:48:24.875793+0000 mon.a (mon.0) 159 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 7548085 | 2024-02-06 00:26:04 | 2024-02-06 17:30:25 | 2024-02-06 18:02:11 | 0:31:46 | 0:21:20 | 0:10:26 | smithi | main | ubuntu | 18.04 | rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-02-06T17:52:27.454122+0000 mon.a (mon.0) 482 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log
pass | 7548086 | 2024-02-06 00:26:05 | 2024-02-06 17:30:25 | 2024-02-06 18:40:39 | 1:10:14 | 0:55:51 | 0:14:23 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7548087 | 2024-02-06 00:26:06 | 2024-02-06 17:35:26 | 2024-02-06 18:30:18 | 0:54:52 | 0:34:55 | 0:19:57 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} | 3 | |
fail | 7548088 | 2024-02-06 00:26:07 | 2024-02-06 17:46:48 | 2024-02-06 18:20:31 | 0:33:43 | 0:22:31 | 0:11:12 | smithi | main | centos | 8.stream | rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ce0a401e7175623ae7f0c4552bd00c17eefaf943 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7548089 | 2024-02-06 00:26:07 | 2024-02-06 17:47:09 | 2024-02-06 18:29:02 | 0:41:53 | 0:31:20 | 0:10:33 | smithi | main | centos | 8.stream | rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
Failure Reason: "2024-02-06T18:03:44.775139+0000 mon.smithi134 (mon.0) 67 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log