User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-04-11 20:33:29 | 2024-04-12 05:08:10 | 2024-04-12 07:12:26 | 2:04:16 | rados | reef-release | smithi | d540eba | 5 | 13 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7652690 | 2024-04-11 20:34:43 | 2024-04-12 05:08:10 | 2024-04-12 05:47:13 | 0:39:03 | 0:33:17 | 0:05:46 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
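To triage this outside teuthology, a minimal sketch of re-running the same workunit against a throwaway vstart cluster (the build paths and vstart flags are assumptions about a local ceph.git checkout at the CEPH_REF above, not part of this run):

```sh
# Sketch only: assumes a compiled ceph.git checkout with a build/ directory.
cd ceph/build
../src/vstart.sh -n -d                 # spin up a throwaway dev cluster
export PATH=$PWD/bin:$PATH
export CEPH_CONF=$PWD/ceph.conf        # vstart writes its conf here
export CEPH_CLI_TEST_DUP_COMMAND=1     # same flag the harness sets above
../qa/workunits/cephtool/test.sh       # the failing workunit
```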
fail | 7652693 | 2024-04-11 20:34:44 | 2024-04-12 05:28:02 | | | 560 | | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm} | 1 |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
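test_cephadm.sh is meant to be self-contained (it bootstraps and tears down its own toy cluster), so a hedged sketch of reproducing this by hand on a disposable host with podman or docker installed:

```sh
# Sketch only: run on a disposable VM, not a machine you care about.
git clone https://github.com/ceph/ceph.git && cd ceph
git checkout d540ebaca6b131a1dd560e7f69e024b133bbaa42   # the CEPH_REF from the log above
sudo bash qa/workunits/cephadm/test_cephadm.sh
```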
pass | 7652695 | 2024-04-11 20:34:45 | 2024-04-12 05:08:10 | 2024-04-12 05:40:35 | 0:32:25 | 0:21:23 | 0:11:02 | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 |
pass | 7652697 | 2024-04-11 20:34:46 | 2024-04-12 05:08:51 | 2024-04-12 05:37:25 | 0:28:34 | 0:14:14 | 0:14:20 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7652699 | 2024-04-11 20:34:47 | 2024-04-12 05:13:12 | 2024-04-12 05:34:27 | 0:21:15 | 0:11:12 | 0:10:03 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
fail | 7652701 | 2024-04-11 20:34:48 | 2024-04-12 05:13:12 | 2024-04-12 05:38:29 | 0:25:17 | 0:14:15 | 0:11:02 | smithi | main | centos | 9.stream | rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason:
"2024-04-12T05:31:27.837757+0000 mon.a (mon.0) 83 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
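This is the generic warning Ceph raises when a pool carries no application tag; a minimal sketch of reproducing and clearing it on any test cluster (the pool and application names are illustrative):

```sh
ceph osd pool create testpool 32                 # a pool with no application tag...
ceph health detail                               # ...raises POOL_APP_NOT_ENABLED
ceph osd pool application enable testpool rbd    # tagging an application clears it
```

In qa suites this warning is usually expected noise and is handled with a log-ignorelist entry rather than fixed in the test itself.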
fail | 7652703 | 2024-04-11 20:34:49 | 2024-04-12 05:14:43 | 2024-04-12 05:54:02 | 0:39:19 | 0:33:11 | 0:06:08 | smithi | main | rhel | 8.6 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7652705 | 2024-04-11 20:34:50 | 2024-04-12 05:14:44 | 2024-04-12 05:34:28 | 0:19:44 | 0:10:40 | 0:09:04 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
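Since the binary crashed rather than failing an assertion in the log, the first step is a backtrace; a sketch of re-running it by hand on the node (assuming the ceph-test packages are still installed and systemd-coredump is active, neither of which is guaranteed on the Ubuntu 22.04 image):

```sh
sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats   # same invocation as the task
coredumpctl list | tail        # if it crashed again, note the PID of the dump...
# coredumpctl gdb <PID>        # ...and open it in gdb for a backtrace
```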
fail | 7652707 | 2024-04-11 20:34:52 | 2024-04-12 05:14:44 | 2024-04-12 05:44:07 | 0:29:23 | 0:17:37 | 0:11:46 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 |
Failure Reason:
saw valgrind issues
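teuthology's valgrind validater archives per-daemon valgrind output alongside each job; a sketch of pulling the flagged error kinds out of a downloaded job archive (the archive path is a placeholder, and the log layout is an assumption about the harness, not confirmed by this run):

```sh
cd ./7652707                                     # placeholder: downloaded job archive
zcat remote/*/log/valgrind/*.log.gz |            # per-daemon valgrind XML reports
  grep -o '<kind>[^<]*</kind>' | sort | uniq -c  # count each error kind (e.g. leaks)
```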
fail | 7652709 | 2024-04-11 20:34:53 | 2024-04-12 05:15:04 | 2024-04-12 05:56:00 | 0:40:56 | 0:30:08 | 0:10:48 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 |
Failure Reason:
saw valgrind issues
pass | 7652711 | 2024-04-11 20:34:54 | 2024-04-12 05:15:05 | 2024-04-12 05:53:18 | 0:38:13 | 0:25:24 | 0:12:49 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
fail | 7652713 | 2024-04-11 20:34:55 | 2024-04-12 05:16:35 | 2024-04-12 05:39:39 | 0:23:04 | 0:17:05 | 0:05:59 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_cephadm} | 1 |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 7652714 | 2024-04-11 20:34:56 | 2024-04-12 05:16:36 | 2024-04-12 05:54:39 | 0:38:03 | 0:27:57 | 0:10:06 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 7652715 | 2024-04-11 20:34:57 | 2024-04-12 05:17:06 | 2024-04-12 07:12:26 | 1:55:20 | 1:45:51 | 0:09:29 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 |
Failure Reason:
"2024-04-12T05:45:48.818301+0000 mon.a (mon.0) 404 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
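Under valgrind an OSD can run slowly enough to miss heartbeats and get marked down, which then trips this health check; a sketch of the usual triage on a live cluster (the OSD id is illustrative):

```sh
ceph health detail                                   # names the down OSD(s)
ceph osd tree down                                   # show only down OSDs in the CRUSH tree
sudo journalctl -u ceph-osd@0 --since "1 hour ago"   # why did osd.0 stop responding?
```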
fail | 7652716 | 2024-04-11 20:34:58 | 2024-04-12 05:17:06 | 2024-04-12 06:24:15 | 1:07:09 | 0:54:00 | 0:13:09 | smithi | main | rhel | 8.6 | rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} | 1 |
Failure Reason:
Command failed (workunit test mon/osd-erasure-code-profile.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-erasure-code-profile.sh'
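That standalone script exercises the erasure-code-profile CLI; a minimal sketch of the command surface it covers, runnable against any test cluster (the profile name and parameters are illustrative):

```sh
ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=osd
ceph osd erasure-code-profile get myprofile   # dump the stored key=value pairs
ceph osd erasure-code-profile ls              # list all profiles
ceph osd erasure-code-profile rm myprofile    # remove it again
```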
fail | 7652717 | 2024-04-11 20:34:59 | 2024-04-12 05:23:28 | 2024-04-12 06:00:59 | 0:37:31 | 0:27:36 | 0:09:55 | smithi | main | ubuntu | 22.04 | rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 7652718 | 2024-04-11 20:35:00 | 2024-04-12 05:23:28 | 2024-04-12 05:59:29 | 0:36:01 | 0:24:57 | 0:11:04 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
fail | 7652719 | 2024-04-11 20:35:01 | 2024-04-12 05:23:28 | 2024-04-12 05:46:24 | 0:22:56 | 0:12:42 | 0:10:14 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d540ebaca6b131a1dd560e7f69e024b133bbaa42 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh' |