Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7451771 2023-11-08 05:54:31 2023-11-08 05:55:39 2023-11-08 06:23:40 0:28:01 0:17:46 0:10:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_cephadm} 1
fail 7451772 2023-11-08 05:54:32 2023-11-08 05:56:40 2023-11-08 10:50:37 4:53:57 4:42:12 0:11:45 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi033 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 7451773 2023-11-08 05:54:33 2023-11-08 05:57:30 2023-11-08 08:08:59 2:11:29 1:56:09 0:15:20 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

saw valgrind issues

pass 7451774 2023-11-08 05:54:34 2023-11-08 06:01:21 2023-11-08 06:33:12 0:31:51 0:21:33 0:10:18 smithi main ubuntu 22.04 rados/singleton/{all/backfill-toofull mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7451775 2023-11-08 05:54:35 2023-11-08 06:01:21 2023-11-08 06:28:50 0:27:29 0:14:32 0:12:57 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7451776 2023-11-08 05:54:35 2023-11-08 06:04:32 2023-11-08 06:44:39 0:40:07 0:29:41 0:10:26 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7451777 2023-11-08 05:54:36 2023-11-08 06:05:23 2023-11-08 06:37:49 0:32:26 0:20:07 0:12:19 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 7451778 2023-11-08 05:54:37 2023-11-08 06:08:43 2023-11-08 06:52:56 0:44:13 0:26:43 0:17:30 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7451779 2023-11-08 05:54:38 2023-11-08 06:14:57 2023-11-08 06:46:45 0:31:48 0:19:51 0:11:57 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 7451780 2023-11-08 05:54:39 2023-11-08 06:15:07 2023-11-08 06:34:45 0:19:38 0:10:16 0:09:22 smithi main centos 9.stream rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
fail 7451781 2023-11-08 05:54:40 2023-11-08 06:15:07 2023-11-08 06:42:46 0:27:39 0:16:59 0:10:40 smithi main centos 9.stream rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

"2023-11-08T06:36:26.816312+0000 osd.3 (osd.3) 4 : cluster [WRN] Error(s) ignored for 2:ad551702:::test:head enough copies available" in cluster log

fail 7451782 2023-11-08 05:54:41 2023-11-08 06:15:08 2023-11-08 06:57:04 0:41:56 0:31:10 0:10:46 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10145169af62d2c41c01dd8795eeddb1b869c21a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7451783 2023-11-08 05:54:42 2023-11-08 06:15:48 2023-11-08 07:15:03 0:59:15 0:48:50 0:10:25 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

saw valgrind issues

fail 7451784 2023-11-08 05:54:42 2023-11-08 06:15:48 2023-11-08 06:38:07 0:22:19 0:11:25 0:10:54 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7451785 2023-11-08 05:54:43 2023-11-08 06:16:19 2023-11-08 06:43:30 0:27:11 0:17:35 0:09:36 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-11-08T06:40:00.000153+0000 mon.a (mon.0) 606 : cluster [WRN] overall HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log

fail 7451786 2023-11-08 05:54:44 2023-11-08 06:16:29 2023-11-08 06:44:18 0:27:49 0:17:00 0:10:49 smithi main ubuntu 22.04 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

"2023-11-08T06:35:52.816479+0000 mon.a (mon.0) 82 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7451787 2023-11-08 05:54:45 2023-11-08 06:16:30 2023-11-08 07:11:52 0:55:22 0:42:25 0:12:57 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

fail 7451788 2023-11-08 05:54:46 2023-11-08 06:18:50 2023-11-08 11:11:09 4:52:19 4:39:28 0:12:51 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi006 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

pass 7451789 2023-11-08 05:54:47 2023-11-08 06:18:51 2023-11-08 06:44:35 0:25:44 0:19:28 0:06:16 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli} 1
pass 7451790 2023-11-08 05:54:47 2023-11-08 06:19:21 2023-11-08 07:05:39 0:46:18 0:33:01 0:13:17 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
fail 7451791 2023-11-08 05:54:48 2023-11-08 06:22:22 2023-11-08 08:39:57 2:17:35 2:06:11 0:11:24 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

saw valgrind issues

fail 7451792 2023-11-08 05:54:49 2023-11-08 06:23:22 2023-11-08 07:02:46 0:39:24 0:29:35 0:09:49 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10145169af62d2c41c01dd8795eeddb1b869c21a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7451793 2023-11-08 05:54:50 2023-11-08 06:23:42 2023-11-08 06:42:56 0:19:14 0:09:35 0:09:39 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7451794 2023-11-08 05:54:51 2023-11-08 06:23:53 2023-11-08 06:54:50 0:30:57 0:21:38 0:09:19 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

fail 7451795 2023-11-08 05:54:52 2023-11-08 06:23:53 2023-11-08 07:34:32 1:10:39 0:59:47 0:10:52 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

saw valgrind issues