User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-07-22 15:03:06 | 2023-07-22 15:42:06 | 2023-07-23 05:01:59 | 13:19:53 | rados | wip-yuri10-testing-2023-07-21-0828-reef | smithi | 1bf364b | 15 | 8 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7348050 | 2023-07-22 15:04:21 | 2023-07-22 15:42:06 | 2023-07-22 18:17:17 | 2:35:11 | 2:25:16 | 0:09:55 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: "2023-07-22T16:25:23.537151+0000 mon.a (mon.0) 178 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7348052 | 2023-07-22 15:04:22 | 2023-07-22 15:44:57 | 2023-07-22 18:46:22 | 3:01:25 | 2:52:08 | 0:09:17 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1 | |
pass | 7348054 | 2023-07-22 15:04:23 | 2023-07-22 15:46:48 | 2023-07-22 16:07:23 | 0:20:35 | 0:14:51 | 0:05:44 | smithi | main | rhel | 8.6 | rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7348056 | 2023-07-22 15:04:24 | 2023-07-22 15:50:40 | 2023-07-22 16:18:20 | 0:27:40 | 0:16:44 | 0:10:56 | smithi | main | centos | 8.stream | rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 | |
pass | 7348058 | 2023-07-22 15:04:25 | 2023-07-22 16:28:58 | 2023-07-22 17:04:56 | 0:35:58 | 0:24:23 | 0:11:35 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 7348060 | 2023-07-22 15:04:25 | 2023-07-22 16:30:19 | 2023-07-22 18:15:49 | 1:45:30 | 1:33:16 | 0:12:14 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: "2023-07-22T17:34:10.201991+0000 mon.a (mon.0) 784 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7348062 | 2023-07-22 15:04:26 | 2023-07-22 16:32:50 | 2023-07-22 17:30:22 | 0:57:32 | 0:40:18 | 0:17:14 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7348064 | 2023-07-22 15:04:27 | 2023-07-22 16:39:22 | 2023-07-22 18:19:16 | 1:39:54 | 1:24:51 | 0:15:03 | smithi | main | centos | 8.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-radosbench} | 2 | |
fail | 7348066 | 2023-07-22 15:04:28 | 2023-07-22 16:41:54 | 2023-07-22 17:32:19 | 0:50:25 | 0:38:31 | 0:11:54 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: saw valgrind issues
fail | 7348068 | 2023-07-22 15:04:29 | 2023-07-22 16:46:45 | 2023-07-22 18:02:43 | 1:15:58 | 1:02:32 | 0:13:26 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: Command failed on smithi192 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-6'
pass | 7348070 | 2023-07-22 15:04:30 | 2023-07-22 16:49:47 | 2023-07-22 17:47:06 | 0:57:19 | 0:50:23 | 0:06:56 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
dead | 7348072 | 2023-07-22 15:04:30 | 2023-07-22 16:51:08 | 2023-07-23 05:01:59 | 12:10:51 | | | smithi | main | centos | 9.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_latest} tasks/progress} | 2 | |
Failure Reason: hit max job timeout
fail | 7348074 | 2023-07-22 15:04:31 | 2023-07-22 16:52:19 | 2023-07-22 20:37:26 | 3:45:07 | 3:34:34 | 0:10:33 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
pass | 7348076 | 2023-07-22 15:04:32 | 2023-07-22 16:55:20 | 2023-07-22 17:30:57 | 0:35:37 | 0:28:30 | 0:07:07 | smithi | main | rhel | 8.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2 | |
pass | 7348078 | 2023-07-22 15:04:33 | 2023-07-22 16:56:41 | 2023-07-22 17:28:05 | 0:31:24 | 0:24:35 | 0:06:49 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/workunits} | 2 | |
pass | 7348080 | 2023-07-22 15:04:34 | 2023-07-22 16:56:42 | 2023-07-22 17:19:47 | 0:23:05 | 0:15:47 | 0:07:18 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 | |
pass | 7348082 | 2023-07-22 15:04:35 | 2023-07-22 16:57:03 | 2023-07-22 17:26:27 | 0:29:24 | 0:19:06 | 0:10:18 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2 | |
pass | 7348084 | 2023-07-22 15:04:36 | 2023-07-22 16:58:14 | 2023-07-22 17:27:27 | 0:29:13 | 0:22:57 | 0:06:16 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7348086 | 2023-07-22 15:04:36 | 2023-07-22 16:58:15 | 2023-07-22 17:22:30 | 0:24:15 | 0:13:43 | 0:10:32 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
pass | 7348088 | 2023-07-22 15:04:37 | 2023-07-22 16:58:56 | 2023-07-22 19:52:12 | 2:53:16 | 2:43:58 | 0:09:18 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} | 1 | |
pass | 7348090 | 2023-07-22 15:04:38 | 2023-07-22 16:59:07 | 2023-07-22 17:20:10 | 0:21:03 | 0:10:34 | 0:10:29 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_20.04}} | 1 | |
fail | 7348092 | 2023-07-22 15:04:39 | 2023-07-22 17:01:48 | 2023-07-22 17:51:07 | 0:49:19 | 0:32:44 | 0:16:35 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7348094 | 2023-07-22 15:04:40 | 2023-07-22 17:04:19 | 2023-07-22 17:32:17 | 0:27:58 | 0:20:51 | 0:07:07 | smithi | main | rhel | 8.6 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/crash} | 2 | |
fail | 7348096 | 2023-07-22 15:04:41 | 2023-07-22 17:05:30 | 2023-07-22 23:46:16 | 6:40:46 | 6:30:56 | 0:09:50 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi096 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
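The per-job rows above can be cross-checked against the run-summary counts (15 pass / 8 fail / 1 dead) with a short script. This is a minimal sketch, assuming only that each job row is pipe-separated with the status in the first cell, as in this report; the sample rows are abbreviated with `...` for illustration:

```python
from collections import Counter

def tally_statuses(rows):
    """Count pass/fail/dead job rows from pipe-separated report lines."""
    counts = Counter()
    for row in rows:
        # Split the row into cells, dropping the leading/trailing pipes.
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if cells and cells[0] in ("pass", "fail", "dead"):
            counts[cells[0]] += 1
    return counts

# Abbreviated sample rows in the same shape as the table above.
rows = [
    "fail | 7348050 | 2023-07-22 15:04:21 | ... | 2 |",
    "pass | 7348052 | 2023-07-22 15:04:22 | ... | 1 |",
    "dead | 7348072 | 2023-07-22 15:04:30 | ... | 2 |",
]
print(tally_statuses(rows))
```

Run against all job rows of this report, the tallies should match the Pass/Fail/Dead columns of the summary table; a mismatch would indicate rows lost in extraction.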