User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-05-02 19:07:25 | 2024-05-04 10:36:42 | 2024-05-04 23:21:57 | 12:45:15 | rados | wip-yuri4-testing-2024-04-29-0642 | smithi | 09dbd6b | 14 | 50 | 9 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7686167 | 2024-05-02 19:08:49 | 2024-05-04 10:36:39 | 2024-05-04 11:04:22 | 0:27:43 | 0:16:46 | 0:10:57 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache} | 2 | |
Failure Reason: "2024-05-04T10:59:23.120776+0000 osd.5 (osd.5) 11 : cluster [ERR] osd.5 pg[3.2]: reservation requested while still reserved" in cluster log
pass | 7686168 | 2024-05-02 19:08:50 | 2024-05-04 10:36:39 | 2024-05-04 11:10:51 | 0:34:12 | 0:24:20 | 0:09:52 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | |
fail | 7686169 | 2024-05-02 19:08:51 | 2024-05-04 10:36:39 | 2024-05-04 10:56:35 | 0:19:56 | 0:09:22 | 0:10:34 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi007 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686170 | 2024-05-02 19:08:52 | 2024-05-04 10:36:40 | 2024-05-04 10:50:28 | 0:13:48 | 0:04:04 | 0:09:44 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi028 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686171 | 2024-05-02 19:08:53 | 2024-05-04 10:36:40 | 2024-05-04 12:31:58 | 1:55:18 | 1:48:25 | 0:06:53 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7686172 | 2024-05-02 19:08:54 | 2024-05-04 10:36:41 | 2024-05-04 12:32:38 | 1:55:57 | 1:46:12 | 0:09:45 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi104 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
fail | 7686173 | 2024-05-02 19:08:55 | 2024-05-04 10:36:41 | 2024-05-04 10:57:20 | 0:20:39 | 0:09:23 | 0:11:16 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686174 | 2024-05-02 19:08:56 | 2024-05-04 10:36:42 | 2024-05-04 14:19:56 | 3:43:14 | 3:33:29 | 0:09:45 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi073 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7686175 | 2024-05-02 19:08:57 | 2024-05-04 10:36:42 | 2024-05-04 11:20:31 | 0:43:49 | 0:33:09 | 0:10:40 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} | 2 | |
fail | 7686176 | 2024-05-02 19:08:58 | 2024-05-04 10:36:43 | 2024-05-04 11:15:13 | 0:38:30 | 0:27:50 | 0:10:40 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi161 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 7686177 | 2024-05-02 19:08:59 | 2024-05-04 10:36:43 | 2024-05-04 10:41:43 | 0:05:00 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 |
Failure Reason: Error reimaging machines: Expected smithi112's OS to be centos 8 but found centos 9
dead | 7686178 | 2024-05-02 19:09:00 | 2024-05-04 10:36:43 | 2024-05-04 10:43:44 | 0:07:01 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason: SSH connection to smithi112 was lost: 'sudo yum install -y kernel'
dead | 7686179 | 2024-05-02 19:09:01 | 2024-05-04 10:36:44 | 2024-05-04 22:47:47 | 12:11:03 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
dead | 7686180 | 2024-05-02 19:09:02 | 2024-05-04 10:36:44 | 2024-05-04 22:46:36 | 12:09:52 | | | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: hit max job timeout
pass | 7686181 | 2024-05-02 19:09:03 | 2024-05-04 10:36:45 | 2024-05-04 10:57:40 | 0:20:55 | 0:15:28 | 0:05:27 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} | 2 | |
fail | 7686182 | 2024-05-02 19:09:04 | 2024-05-04 10:36:45 | 2024-05-04 10:49:30 | 0:12:45 | 0:04:09 | 0:08:36 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
pass | 7686183 | 2024-05-02 19:09:05 | 2024-05-04 10:38:46 | 2024-05-04 11:06:35 | 0:27:49 | 0:19:40 | 0:08:09 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 | |
fail | 7686184 | 2024-05-02 19:09:06 | 2024-05-04 10:39:16 | 2024-05-04 10:57:20 | 0:18:04 | 0:07:23 | 0:10:41 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686185 | 2024-05-02 19:09:07 | 2024-05-04 10:40:17 | 2024-05-04 11:14:06 | 0:33:49 | 0:22:46 | 0:11:03 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects-localized} | 2 | |
Failure Reason: "2024-05-04T11:05:41.400489+0000 osd.4 (osd.4) 52 : cluster [ERR] osd.4 pg[3.12]: reservation requested while still reserved" in cluster log
fail | 7686186 | 2024-05-02 19:09:08 | 2024-05-04 10:40:17 | 2024-05-04 10:59:24 | 0:19:07 | 0:09:26 | 0:09:41 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi031 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686187 | 2024-05-02 19:09:10 | 2024-05-04 10:40:38 | 2024-05-04 11:01:24 | 0:20:46 | 0:13:26 | 0:07:20 | smithi | main | centos | 9.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: "2024-05-04T10:52:38.187804+0000 mon.a (mon.0) 86 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7686188 | 2024-05-02 19:09:11 | 2024-05-04 10:40:38 | 2024-05-04 10:58:09 | 0:17:31 | 0:07:23 | 0:10:08 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686189 | 2024-05-02 19:09:12 | 2024-05-04 10:40:39 | 2024-05-04 11:01:18 | 0:20:39 | 0:09:40 | 0:10:59 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} | 2 | |
Failure Reason: Command failed on smithi029 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
pass | 7686190 | 2024-05-02 19:09:13 | 2024-05-04 10:40:39 | 2024-05-04 11:15:19 | 0:34:40 | 0:26:16 | 0:08:24 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
fail | 7686191 | 2024-05-02 19:09:14 | 2024-05-04 10:41:50 | 2024-05-04 11:12:57 | 0:31:07 | 0:24:54 | 0:06:13 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7686192 | 2024-05-02 19:09:15 | 2024-05-04 10:42:00 | 2024-05-04 11:10:11 | 0:28:11 | 0:20:07 | 0:08:04 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 | |
fail | 7686193 | 2024-05-02 19:09:16 | 2024-05-04 10:43:01 | 2024-05-04 10:53:55 | 0:10:54 | 0:04:11 | 0:06:43 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: Command failed on smithi032 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686194 | 2024-05-02 19:09:18 | 2024-05-04 10:43:11 | 2024-05-04 11:03:35 | 0:20:24 | 0:09:28 | 0:10:56 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason: Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686195 | 2024-05-02 19:09:19 | 2024-05-04 10:44:02 | 2024-05-04 11:08:01 | 0:23:59 | 0:16:30 | 0:07:29 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/cache-agent-big} | 2 | |
Failure Reason: "2024-05-04T11:02:41.900553+0000 osd.1 (osd.1) 84 : cluster [ERR] osd.1 pg[4.2]: reservation requested while still reserved" in cluster log
fail | 7686196 | 2024-05-02 19:09:20 | 2024-05-04 10:44:42 | 2024-05-04 11:00:05 | 0:15:23 | 0:06:24 | 0:08:59 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi098 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686197 | 2024-05-02 19:09:21 | 2024-05-04 10:46:53 | 2024-05-04 11:08:14 | 0:21:21 | 0:14:57 | 0:06:24 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} | 2 | |
Failure Reason: "2024-05-04T11:05:33.791269+0000 osd.0 (osd.0) 24 : cluster [ERR] osd.0 pg[4.0]: reservation requested while still reserved" in cluster log
fail | 7686198 | 2024-05-02 19:09:23 | 2024-05-04 10:47:14 | 2024-05-04 11:07:26 | 0:20:12 | 0:09:57 | 0:10:15 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7686199 | 2024-05-02 19:09:24 | 2024-05-04 10:47:14 | 2024-05-04 12:56:47 | 2:09:33 | 2:01:40 | 0:07:53 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 | |
Failure Reason: "2024-05-04T11:36:31.065477+0000 osd.3 (osd.3) 113 : cluster [ERR] osd.3 pg[7.fs1]: reservation requested while still reserved" in cluster log
pass | 7686200 | 2024-05-02 19:09:25 | 2024-05-04 10:47:35 | 2024-05-04 11:18:24 | 0:30:49 | 0:19:32 | 0:11:17 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
fail | 7686201 | 2024-05-02 19:09:26 | 2024-05-04 10:51:06 | 2024-05-04 11:03:36 | 0:12:30 | 0:04:00 | 0:08:30 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi070 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686202 | 2024-05-02 19:09:27 | 2024-05-04 10:52:17 | 2024-05-04 11:18:52 | 0:26:35 | 0:16:59 | 0:09:36 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7686203 | 2024-05-02 19:09:28 | 2024-05-04 10:55:58 | 2024-05-04 11:31:28 | 0:35:30 | 0:28:10 | 0:07:20 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7686204 | 2024-05-02 19:09:29 | 2024-05-04 10:55:58 | 2024-05-04 11:08:45 | 0:12:47 | 0:06:06 | 0:06:41 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason: Command failed on smithi135 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686205 | 2024-05-02 19:09:30 | 2024-05-04 10:55:59 | 2024-05-04 11:15:21 | 0:19:22 | 0:09:19 | 0:10:03 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
Failure Reason: Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
pass | 7686206 | 2024-05-02 19:09:31 | 2024-05-04 10:56:19 | 2024-05-04 11:20:01 | 0:23:42 | 0:13:30 | 0:10:12 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 | |
fail | 7686207 | 2024-05-02 19:09:32 | 2024-05-04 10:56:20 | 2024-05-04 11:06:43 | 0:10:23 | 0:04:21 | 0:06:02 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686208 | 2024-05-02 19:09:33 | 2024-05-04 10:56:20 | 2024-05-04 11:03:56 | 0:07:36 | | | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 |
Failure Reason: Command failed on smithi045 with status 100: 'sudo apt-get clean'
dead | 7686209 | 2024-05-02 19:09:34 | 2024-05-04 10:56:20 | 2024-05-04 11:19:42 | 0:23:22 | | | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: SSH connection to smithi045 was lost: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'
fail | 7686210 | 2024-05-02 19:09:35 | 2024-05-04 10:56:21 | 2024-05-04 11:14:00 | 0:17:39 | 0:07:21 | 0:10:18 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686211 | 2024-05-02 19:09:36 | 2024-05-04 10:56:21 | 2024-05-04 11:16:55 | 0:20:34 | 0:08:51 | 0:11:43 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
Failure Reason: Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
dead | 7686212 | 2024-05-02 19:09:37 | 2024-05-04 10:57:42 | 2024-05-04 23:08:11 | 12:10:29 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout
dead | 7686213 | 2024-05-02 19:09:38 | 2024-05-04 10:57:42 | 2024-05-04 23:08:04 | 12:10:22 | | | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: hit max job timeout
pass | 7686214 | 2024-05-02 19:09:39 | 2024-05-04 10:58:13 | 2024-05-04 12:38:04 | 1:39:51 | 1:29:30 | 0:10:21 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/radosbench} | 2 | |
fail | 7686215 | 2024-05-02 19:09:41 | 2024-05-04 10:58:13 | 2024-05-04 11:21:42 | 0:23:29 | 0:06:02 | 0:17:27 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: SSH connection to smithi002 was lost: 'sudo yum -y install ceph-radosgw'
dead | 7686216 | 2024-05-02 19:09:41 | 2024-05-04 11:06:35 | 2024-05-04 11:17:53 | 0:11:18 | 0:03:46 | 0:07:32 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} | 2 | |
Failure Reason: ['Failed to manage policy for boolean nagios_run_sudo: [Errno 11] Resource temporarily unavailable']
pass | 7686217 | 2024-05-02 19:09:42 | 2024-05-04 11:06:36 | 2024-05-04 11:45:11 | 0:38:35 | 0:28:10 | 0:10:25 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | |
fail | 7686218 | 2024-05-02 19:09:44 | 2024-05-04 11:06:36 | 2024-05-04 12:58:03 | 1:51:27 | 1:43:47 | 0:07:40 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7686219 | 2024-05-02 19:09:44 | 2024-05-04 11:06:37 | 2024-05-04 11:28:12 | 0:21:35 | 0:07:22 | 0:14:13 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: Command failed on smithi132 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686220 | 2024-05-02 19:09:46 | 2024-05-04 11:10:18 | 2024-05-04 11:58:15 | 0:47:57 | 0:40:59 | 0:06:58 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} | 1 | |
Failure Reason: Command failed (workunit test mon/osd-erasure-code-profile.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-erasure-code-profile.sh'
fail | 7686221 | 2024-05-02 19:09:47 | 2024-05-04 11:10:58 | 2024-05-04 11:22:40 | 0:11:42 | 0:03:51 | 0:07:51 | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 | |
Failure Reason: Command failed on smithi111 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686222 | 2024-05-02 19:09:48 | 2024-05-04 11:10:59 | 2024-05-04 11:31:33 | 0:20:34 | 0:09:04 | 0:11:30 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
pass | 7686223 | 2024-05-02 19:09:49 | 2024-05-04 11:11:29 | 2024-05-04 11:41:12 | 0:29:43 | 0:21:17 | 0:08:26 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-balanced} | 2 | |
pass | 7686224 | 2024-05-02 19:09:50 | 2024-05-04 11:11:50 | 2024-05-04 11:37:12 | 0:25:22 | 0:19:28 | 0:05:54 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |
fail | 7686225 | 2024-05-02 19:09:51 | 2024-05-04 11:11:50 | 2024-05-04 11:27:46 | 0:15:56 | 0:08:24 | 0:07:32 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason: Command failed on smithi059 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686226 | 2024-05-02 19:09:52 | 2024-05-04 11:11:51 | 2024-05-04 11:36:47 | 0:24:56 | 0:18:58 | 0:05:58 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/small-objects} | 2 | |
Failure Reason: "2024-05-04T11:34:36.275120+0000 osd.0 (osd.0) 253 : cluster [ERR] osd.0 pg[3.49]: reservation requested while still reserved" in cluster log
fail | 7686227 | 2024-05-02 19:09:53 | 2024-05-04 11:11:51 | 2024-05-04 11:23:21 | 0:11:30 | 0:04:01 | 0:07:29 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: Command failed on smithi094 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
pass | 7686228 | 2024-05-02 19:09:54 | 2024-05-04 11:11:52 | 2024-05-04 11:52:29 | 0:40:37 | 0:28:50 | 0:11:47 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects-localized} | 2 | |
fail | 7686229 | 2024-05-02 19:09:55 | 2024-05-04 11:11:52 | 2024-05-04 11:44:39 | 0:32:47 | 0:25:33 | 0:07:14 | smithi | main | centos | 9.stream | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi019 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=09dbd6bba52d19ba471c33bc15c008e6ad158ea6 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7686230 | 2024-05-02 19:09:56 | 2024-05-04 11:11:52 | 2024-05-04 11:52:02 | 0:40:10 | 0:29:06 | 0:11:04 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects} | 2 | |
fail | 7686231 | 2024-05-02 19:09:57 | 2024-05-04 11:11:53 | 2024-05-04 11:25:14 | 0:13:21 | 0:06:16 | 0:07:05 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi086 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686232 | 2024-05-02 19:09:58 | 2024-05-04 11:11:53 | 2024-05-04 11:31:26 | 0:19:33 | 0:09:20 | 0:10:13 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686233 | 2024-05-02 19:09:59 | 2024-05-04 11:11:54 | 2024-05-04 11:26:49 | 0:14:55 | 0:08:39 | 0:06:16 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 | |
Failure Reason: Command failed on smithi053 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'
fail | 7686234 | 2024-05-02 19:10:00 | 2024-05-04 11:11:54 | 2024-05-04 11:50:34 | 0:38:40 | 0:26:32 | 0:12:08 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} | 2 | |
Failure Reason: "2024-05-04T11:38:43.118455+0000 osd.5 (osd.5) 37 : cluster [ERR] osd.5 pg[1.3s2]: reservation requested while still reserved" in cluster log
dead | 7686235 | 2024-05-02 19:10:01 | 2024-05-04 11:11:55 | 2024-05-04 23:21:57 | 12:10:02 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |||
Failure Reason: hit max job timeout
fail | 7686236 | 2024-05-02 19:10:02 | 2024-05-04 11:11:55 | 2024-05-04 12:11:48 | 0:59:53 | 0:52:21 | 0:07:32 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7686237 | 2024-05-02 19:10:03 | 2024-05-04 11:11:55 | 2024-05-04 12:03:11 | 0:51:16 | 0:39:45 | 0:11:31 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} | 2 | |
Failure Reason: "2024-05-04T11:33:02.638745+0000 osd.4 (osd.4) 9 : cluster [ERR] osd.4 pg[3.2s3]: reservation requested while still reserved" in cluster log
fail | 7686238 | 2024-05-02 19:10:04 | 2024-05-04 11:15:26 | 2024-05-04 11:33:15 | 0:17:49 | 0:07:12 | 0:10:37 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7686239 | 2024-05-02 19:10:05 | 2024-05-04 11:18:27 | 2024-05-04 11:31:16 | 0:12:49 | 0:04:02 | 0:08:47 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: Command failed on smithi007 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:09dbd6bba52d19ba471c33bc15c008e6ad158ea6 pull'