User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2024-04-29 19:49:34 | 2024-04-29 19:52:04 | 2024-04-30 08:08:55 | 12:16:51 | rados | wip-yuri6-testing-2024-04-02-1310 | smithi | 354447c | 6 | 15 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7678683 | 2024-04-29 19:51:04 | 2024-04-29 19:52:04 | 2024-04-29 21:31:37 | 1:39:33 | 1:33:10 | 0:06:23 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1 |
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh' |
pass | 7678684 | 2024-04-29 19:51:06 | 2024-04-29 19:52:05 | 2024-04-29 20:22:55 | 0:30:50 | 0:20:21 | 0:10:29 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
fail | 7678685 | 2024-04-29 19:51:07 | 2024-04-29 19:53:15 | 2024-04-29 20:14:34 | 0:21:19 | 0:14:02 | 0:07:17 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: "2024-04-29T20:10:38.228053+0000 mon.smithi149 (mon.0) 814 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7678686 | 2024-04-29 19:51:08 | 2024-04-29 19:53:26 | 2024-04-29 20:44:28 | 0:51:02 | 0:34:58 | 0:16:04 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 |
Failure Reason: "2024-04-29T20:30:00.000095+0000 mon.a (mon.0) 1197 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log |
dead | 7678687 | 2024-04-29 19:51:09 | 2024-04-29 19:58:17 | 2024-04-30 08:08:55 | 12:10:38 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: hit max job timeout |
fail | 7678688 | 2024-04-29 19:51:11 | 2024-04-29 19:58:17 | 2024-04-29 21:53:13 | 1:54:56 | 1:47:53 | 0:07:03 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: "2024-04-29T20:23:46.087624+0000 mon.a (mon.0) 356 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
pass | 7678689 | 2024-04-29 19:51:12 | 2024-04-29 19:58:17 | 2024-04-29 20:19:09 | 0:20:52 | 0:14:03 | 0:06:49 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
pass | 7678690 | 2024-04-29 19:51:13 | 2024-04-29 19:58:18 | 2024-04-29 20:17:27 | 0:19:09 | 0:12:44 | 0:06:25 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 |
fail | 7678691 | 2024-04-29 19:51:15 | 2024-04-29 19:58:18 | 2024-04-29 21:22:31 | 1:24:13 | 1:14:01 | 0:10:12 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 |
Failure Reason: "2024-04-29T20:30:00.000112+0000 mon.a (mon.0) 1104 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log |
fail | 7678692 | 2024-04-29 19:51:16 | 2024-04-29 19:58:28 | 2024-04-29 20:56:18 | 0:57:50 | 0:49:47 | 0:08:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun |
fail | 7678693 | 2024-04-29 19:51:17 | 2024-04-29 19:58:29 | 2024-04-29 20:19:50 | 0:21:21 | 0:15:02 | 0:06:19 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: "2024-04-29T20:13:56.095048+0000 mon.a (mon.0) 687 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7678694 | 2024-04-29 19:51:18 | 2024-04-29 19:58:29 | 2024-04-29 20:17:25 | 0:18:56 | 0:13:01 | 0:05:55 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
fail | 7678695 | 2024-04-29 19:51:20 | 2024-04-29 19:58:29 | 2024-04-29 20:39:05 | 0:40:36 | 0:28:30 | 0:12:06 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 7678696 | 2024-04-29 19:51:21 | 2024-04-29 19:58:30 | 2024-04-29 20:17:52 | 0:19:22 | 0:12:33 | 0:06:49 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 |
pass | 7678697 | 2024-04-29 19:51:22 | 2024-04-29 19:58:30 | 2024-04-29 20:37:36 | 0:39:06 | 0:26:54 | 0:12:12 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
fail | 7678698 | 2024-04-29 19:51:24 | 2024-04-29 19:58:31 | 2024-04-29 20:30:57 | 0:32:26 | 0:22:06 | 0:10:20 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: "2024-04-29T20:26:59.982933+0000 mon.smithi089 (mon.0) 804 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7678699 | 2024-04-29 19:51:25 | 2024-04-29 19:58:31 | 2024-04-29 20:25:17 | 0:26:46 | 0:19:55 | 0:06:51 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun |
fail | 7678700 | 2024-04-29 19:51:26 | 2024-04-29 19:58:31 | 2024-04-29 20:37:16 | 0:38:45 | 0:32:23 | 0:06:22 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun |
fail | 7678701 | 2024-04-29 19:51:27 | 2024-04-29 19:58:32 | 2024-04-29 20:20:32 | 0:22:00 | 0:14:17 | 0:07:43 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: "2024-04-29T20:14:08.906445+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7678702 | 2024-04-29 19:51:29 | 2024-04-29 19:58:32 | 2024-04-29 20:36:34 | 0:38:02 | 0:28:34 | 0:09:28 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
dead | 7678703 | 2024-04-29 19:51:30 | 2024-04-29 19:58:32 | 2024-04-30 08:08:18 | 12:09:46 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: hit max job timeout |
fail | 7678704 | 2024-04-29 19:51:31 | 2024-04-29 19:58:33 | 2024-04-29 21:52:20 | 1:53:47 | 1:46:25 | 0:07:22 | smithi | main | centos | 9.stream | rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: "2024-04-29T20:24:29.984506+0000 mon.a (mon.0) 358 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 7678705 | 2024-04-29 19:51:33 | 2024-04-29 19:58:33 | 2024-04-29 21:56:27 | 1:57:54 | 1:50:05 | 0:07:49 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun |