User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-10-07 14:00:24 | 2023-10-07 14:03:10 | 2023-10-07 17:09:17 | 3:06:07 | rados | wip-yuri6-testing-2023-10-06-0904-quincy | smithi | fedcea8 | 2 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7416660 | 2023-10-07 14:01:49 | 2023-10-07 14:03:10 | 2023-10-07 14:24:18 | 0:21:08 | 0:12:45 | 0:08:23 | smithi | main | centos | 8.stream | rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "1696688543.4744868 mon.a (mon.0) 68 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7416663 | 2023-10-07 14:01:50 | 2023-10-07 14:03:10 | 2023-10-07 14:24:15 | 0:21:05 | 0:12:00 | 0:09:05 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filejournal supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "1696688541.1360362 mon.a (mon.0) 70 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7416666 | 2023-10-07 14:01:51 | 2023-10-07 14:03:10 | 2023-10-07 14:35:04 | 0:31:54 | 0:21:46 | 0:10:08 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi031 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fedcea84a4bd31f0708715b39e04a135187af2ea TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7416669 | 2023-10-07 14:01:52 | 2023-10-07 14:04:01 | 2023-10-07 16:17:50 | 2:13:49 | 2:05:55 | 0:07:54 | smithi | main | rhel | 8.4 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 | |
fail | 7416672 | 2023-10-07 14:01:53 | 2023-10-07 14:04:31 | 2023-10-07 17:02:48 | 2:58:17 | 2:46:45 | 0:11:32 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filestore-idempotent-aio-journal supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "1696688710.704156 mon.a (mon.0) 70 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7416675 | 2023-10-07 14:01:54 | 2023-10-07 14:05:41 | 2023-10-07 16:37:41 | 2:32:00 | 2:22:53 | 0:09:07 | smithi | main | centos | 8.stream | rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "1696688732.1357138 mon.a (mon.0) 70 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7416678 | 2023-10-07 14:01:54 | 2023-10-07 14:06:32 | 2023-10-07 14:45:48 | 0:39:16 | 0:30:06 | 0:09:10 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
pass | 7416681 | 2023-10-07 14:01:55 | 2023-10-07 14:06:33 | 2023-10-07 17:09:17 | 3:02:44 | 2:34:43 | 0:28:01 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7416685 | 2023-10-07 14:01:56 | 2023-10-07 14:06:43 | 2023-10-07 14:38:00 | 0:31:17 | 0:21:03 | 0:10:14 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fedcea84a4bd31f0708715b39e04a135187af2ea TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7416688 | 2023-10-07 14:01:57 | 2023-10-07 14:40:16 | 1396 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_nfs} | 1 |
Failure Reason: Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
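Most of the failures in this run are the same POOL_APP_NOT_ENABLED health warning tripping the cluster-log check, not a test logic error. As a sketch (the exact placement and regex depend on the suite's YAML layout), such a known-benign warning is typically suppressed by adding it to the job's `log-ignorelist` in a teuthology overrides fragment, or avoided at the source by enabling an application on the offending pool with `ceph osd pool application enable <pool> <app>`:

```yaml
# Hypothetical teuthology overrides fragment: tell the cluster-log
# scraper to ignore the POOL_APP_NOT_ENABLED health warning so it
# does not fail otherwise-passing jobs.
overrides:
  ceph:
    log-ignorelist:
      - '\(POOL_APP_NOT_ENABLED\)'
```

This only masks the warning in log scraping; whether the fix belongs in the test YAML or in the test's pool setup is a judgment call for the suite maintainers.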