User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-11-08 21:46:10 | 2023-11-08 21:48:50 | 2023-11-09 04:30:22 | 6:41:32 | rados | reef-release | smithi | 55e3239 | 3 | 9 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7452120 | 2023-11-08 21:47:31 | 2023-11-08 21:48:50 | 2023-11-09 00:06:17 | 2:17:27 | 2:08:10 | 0:09:17 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 |
fail | 7452121 | 2023-11-08 21:47:32 | 2023-11-08 21:48:50 | 2023-11-08 22:55:59 | 1:07:09 | 0:57:26 | 0:09:43 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 |
fail | 7452122 | 2023-11-08 21:47:33 | 2023-11-08 21:49:00 | 2023-11-08 23:41:37 | 1:52:37 | 1:10:21 | 0:42:16 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} | 2 |
pass | 7452123 | 2023-11-08 21:47:34 | 2023-11-08 21:50:31 | 2023-11-09 00:04:32 | 2:14:01 | 2:03:22 | 0:10:39 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 |
fail | 7452124 | 2023-11-08 21:47:35 | 2023-11-08 21:50:41 | 2023-11-08 22:25:50 | 0:35:09 | 0:24:57 | 0:10:12 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 |
fail | 7452125 | 2023-11-08 21:47:35 | 2023-11-08 21:50:52 | 2023-11-08 22:33:07 | 0:42:15 | 0:32:07 | 0:10:08 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 |
pass | 7452126 | 2023-11-08 21:47:36 | 2023-11-08 21:52:12 | 2023-11-08 22:27:06 | 0:34:54 | 0:25:51 | 0:09:03 | smithi | main | centos | 9.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_latest}} | 1 |
fail | 7452127 | 2023-11-08 21:47:37 | 2023-11-08 21:52:13 | 2023-11-09 00:22:01 | 2:29:48 | 2:20:23 | 0:09:25 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 |
pass | 7452128 | 2023-11-08 21:47:38 | 2023-11-08 21:52:43 | 2023-11-08 22:28:32 | 0:35:49 | 0:26:38 | 0:09:11 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 |
fail | 7452129 | 2023-11-08 21:47:39 | 2023-11-08 21:53:23 | 2023-11-08 23:43:38 | 1:50:15 | 1:08:07 | 0:42:08 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 |
fail | 7452130 | 2023-11-08 21:47:40 | 2023-11-08 21:54:14 | 2023-11-08 22:13:12 | 0:18:58 | 0:09:47 | 0:09:11 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
fail | 7452131 | 2023-11-08 21:47:41 | 2023-11-08 21:54:14 | 2023-11-09 04:30:22 | 6:36:08 | 6:26:59 | 0:09:09 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 |

Failure reasons for the failed jobs, keyed by job ID:

Job ID | Failure Reason |
---|---|
7452120 | "2023-11-08T22:59:28.289512+0000 mon.a (mon.0) 580 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
7452121 | "2023-11-08T22:23:09.961815+0000 osd.2 (osd.2) 12 : cluster [WRN] osd.2 ep: 168 scrubber::ReplicaReservations pg[12.2]: timeout on replica reservations (since 2023-11-08 22:23:04)" in cluster log |
7452122 | Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=55e3239498650453ff76a9b06a37f1a6f488c8fd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
7452124 | saw valgrind issues |
7452125 | saw valgrind issues |
7452127 | "2023-11-08T22:29:29.604826+0000 mon.a (mon.0) 356 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
7452129 | Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi080 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=55e3239498650453ff76a9b06a37f1a6f488c8fd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' |
7452130 | Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats' |
7452131 | Command failed (workunit test rados/test.sh) on smithi029 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=55e3239498650453ff76a9b06a37f1a6f488c8fd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
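The Runtime, Duration, and In Waiting columns appear to satisfy Runtime = Duration + In Waiting for every job above. A minimal sketch that spot-checks this relationship against a few rows (the values are copied from the jobs table; the helper name is illustrative, not part of teuthology):

```python
from datetime import timedelta

def parse_hms(s: str) -> timedelta:
    """Parse an H:MM:SS string such as '2:17:27' into a timedelta."""
    hours, minutes, seconds = (int(part) for part in s.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# (job_id, runtime, duration, in_waiting) copied from the jobs table above
jobs = [
    ("7452120", "2:17:27", "2:08:10", "0:09:17"),
    ("7452121", "1:07:09", "0:57:26", "0:09:43"),
    ("7452131", "6:36:08", "6:26:59", "0:09:09"),
]

for job_id, runtime, duration, in_waiting in jobs:
    ok = parse_hms(runtime) == parse_hms(duration) + parse_hms(in_waiting)
    print(f"job {job_id}: runtime == duration + in_waiting -> {ok}")
```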