User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-10-15 15:10:04 | 2023-10-15 15:12:07 | 2023-10-15 15:59:28 | 0:47:21 | rados | quincy-release | smithi | 9d6d32b | 5 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7428446 | 2023-10-15 15:11:21 | 2023-10-15 15:12:07 | 2023-10-15 15:39:59 | 0:27:52 | 0:16:40 | 0:11:12 | smithi | main | centos | 8.stream | rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: "1697384252.9015524 mon.a (mon.0) 70 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7428448 | 2023-10-15 15:11:22 | 2023-10-15 15:12:07 | 2023-10-15 15:37:16 | 0:25:09 | 0:12:05 | 0:13:04 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/filejournal supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: "1697384120.7849827 mon.a (mon.0) 65 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7428449 | 2023-10-15 15:11:23 | 2023-10-15 15:13:28 | 2023-10-15 15:40:33 | 0:27:05 | 0:17:40 | 0:09:25 | smithi | main | rhel | 8.4 | rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d6d32bb3452d5179bde2ee1cfa05df7a65f4586 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 7428450 | 2023-10-15 15:11:23 | 2023-10-15 15:13:38 | 2023-10-15 15:53:13 | 0:39:35 | 0:27:03 | 0:12:32 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d6d32bb3452d5179bde2ee1cfa05df7a65f4586 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7428451 | 2023-10-15 15:11:24 | 2023-10-15 15:13:49 | 2023-10-15 15:49:19 | 0:35:30 | 0:25:22 | 0:10:08 | smithi | main | rhel | 8.4 | rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/connectivity task/test_nfs} | 1 | |
Failure Reason: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
pass | 7428452 | 2023-10-15 15:11:25 | 2023-10-15 15:13:49 | 2023-10-15 15:36:10 | 0:22:21 | 0:11:30 | 0:10:51 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7428453 | 2023-10-15 15:11:26 | 2023-10-15 15:14:09 | 2023-10-15 15:59:28 | 0:45:19 | 0:33:27 | 0:11:52 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
pass | 7428454 | 2023-10-15 15:11:27 | 2023-10-15 15:14:30 | 2023-10-15 15:42:15 | 0:27:45 | 0:13:48 | 0:13:57 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7428455 | 2023-10-15 15:11:27 | 2023-10-15 15:14:30 | 2023-10-15 15:51:57 | 0:37:27 | 0:25:52 | 0:11:35 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi130 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9d6d32bb3452d5179bde2ee1cfa05df7a65f4586 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7428456 | 2023-10-15 15:11:28 | 2023-10-15 15:14:50 | 2023-10-15 15:45:22 | 0:30:32 | 0:18:36 | 0:11:56 | smithi | main | ubuntu | 20.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} | 2 | |
pass | 7428457 | 2023-10-15 15:11:29 | 2023-10-15 15:15:31 | 2023-10-15 15:57:22 | 0:41:51 | 0:30:13 | 0:11:38 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/small-objects} | 2 | |
fail | 7428458 | 2023-10-15 15:11:30 | 2023-10-15 15:15:51 | 2023-10-15 15:48:47 | 0:32:56 | 0:22:38 | 0:10:18 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_nfs} | 1 | |
Failure Reason: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)