User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-12-07 16:37:24 | 2023-12-07 16:39:22 | 2023-12-08 04:52:10 | 12:12:48 | rados | wip-yuri8-testing-2023-12-06-1425 | smithi | e068ebc | 3 | 35 | 8 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7482159 | | 2023-12-07 16:38:37 | 2023-12-07 16:39:19 | 2023-12-08 04:51:48 | 12:12:29 | | | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_rgw_multisite} | 3 |
Failure Reason: hit max job timeout
fail | 7482160 | | 2023-12-07 16:38:38 | 2023-12-07 16:39:19 | 2023-12-07 17:45:11 | 1:05:52 | 0:54:10 | 0:11:42 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
pass | 7482161 | | 2023-12-07 16:38:38 | 2023-12-07 16:39:20 | 2023-12-07 16:58:35 | 0:19:15 | 0:10:44 | 0:08:31 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
fail | 7482162 | | 2023-12-07 16:38:39 | 2023-12-07 16:39:20 | 2023-12-07 17:01:29 | 0:22:09 | 0:14:23 | 0:07:46 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} | 1 |
Failure Reason: Command failed on smithi143 with status 1: 'sudo yum -y install ceph'
dead | 7482163 | | 2023-12-07 16:38:40 | 2023-12-07 16:39:20 | 2023-12-07 16:47:14 | 0:07:54 | | | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 |
Failure Reason: Error reimaging machines: Expected smithi063's OS to be centos 8 but found rhel 8.6
dead | 7482164 | | 2023-12-07 16:38:40 | 2023-12-07 16:39:21 | 2023-12-07 16:49:15 | 0:09:54 | | | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: SSH connection to smithi063 was lost: 'sudo yum install -y kernel'
fail | 7482165 | | 2023-12-07 16:38:41 | 2023-12-07 16:39:21 | 2023-12-07 17:25:25 | 0:46:04 | 0:35:03 | 0:11:01 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} | 2 |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e068ebcf7ecc6503f24666fb6b152034d3fe1067 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7482166 | | 2023-12-07 16:38:42 | 2023-12-07 16:39:21 | 2023-12-07 16:58:35 | 0:19:14 | 0:13:32 | 0:05:42 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_ca_signed_key} | 2 |
Failure Reason: Command failed on smithi192 with status 1: 'sudo yum -y install ceph'
fail | 7482167 | | 2023-12-07 16:38:43 | 2023-12-07 16:39:22 | 2023-12-07 18:05:39 | 1:26:17 | 1:13:07 | 0:13:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7482168 | | 2023-12-07 16:38:43 | 2023-12-07 16:39:22 | 2023-12-07 17:04:18 | 0:24:56 | 0:13:46 | 0:11:10 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7482169 | | 2023-12-07 16:38:44 | 2023-12-07 16:39:22 | 2023-12-07 17:12:28 | 0:33:06 | 0:26:30 | 0:06:36 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482170 | | 2023-12-07 16:38:45 | 2023-12-07 16:39:23 | 2023-12-07 17:17:13 | 0:37:50 | 0:27:40 | 0:10:10 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
pass | 7482171 | | 2023-12-07 16:38:46 | 2023-12-07 16:39:23 | 2023-12-07 17:12:21 | 0:32:58 | 0:19:22 | 0:13:36 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} tasks/libcephsqlite} | 2 |
fail | 7482172 | | 2023-12-07 16:38:46 | 2023-12-07 16:39:23 | 2023-12-07 18:59:45 | 2:20:22 | 2:08:15 | 0:12:07 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482173 | | 2023-12-07 16:38:47 | 2023-12-07 16:39:24 | 2023-12-07 17:06:10 | 0:26:46 | 0:17:25 | 0:09:21 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_host_drain} | 3 |
Failure Reason: Command failed on smithi077 with status 1: 'sudo yum -y install ceph'
fail | 7482174 | | 2023-12-07 16:38:48 | 2023-12-07 16:39:24 | 2023-12-07 17:26:56 | 0:47:32 | 0:33:51 | 0:13:41 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482175 | | 2023-12-07 16:38:49 | 2023-12-07 16:39:24 | 2023-12-07 17:17:26 | 0:38:02 | 0:22:13 | 0:15:49 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/dashboard} | 2 |
fail | 7482176 | | 2023-12-07 16:38:50 | 2023-12-07 16:39:25 | 2023-12-07 17:17:49 | 0:38:24 | 0:23:22 | 0:15:02 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 |
dead | 7482177 | | 2023-12-07 16:38:50 | 2023-12-07 16:39:25 | 2023-12-08 04:51:06 | 12:11:41 | | | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: hit max job timeout
fail | 7482178 | | 2023-12-07 16:38:51 | 2023-12-07 16:39:26 | 2023-12-07 17:16:36 | 0:37:10 | 0:25:21 | 0:11:49 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482179 | | 2023-12-07 16:38:52 | 2023-12-07 16:39:26 | 2023-12-07 17:02:00 | 0:22:34 | 0:15:25 | 0:07:09 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 |
Failure Reason: Command failed on smithi145 with status 1: 'sudo yum -y install ceph'
fail | 7482180 | | 2023-12-07 16:38:53 | 2023-12-07 16:39:26 | 2023-12-07 17:01:17 | 0:21:51 | 0:11:05 | 0:10:46 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: Command failed (workunit test post-file.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e068ebcf7ecc6503f24666fb6b152034d3fe1067 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'
fail | 7482181 | | 2023-12-07 16:38:54 | 2023-12-07 16:39:27 | 2023-12-07 18:06:24 | 1:26:57 | 1:12:20 | 0:14:37 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482182 | | 2023-12-07 16:38:54 | 2023-12-07 16:39:27 | 2023-12-07 17:00:42 | 0:21:15 | 0:14:13 | 0:07:02 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 |
Failure Reason: Command failed on smithi026 with status 1: 'sudo yum -y install ceph'
fail | 7482183 | | 2023-12-07 16:38:55 | 2023-12-07 16:39:27 | 2023-12-07 17:03:47 | 0:24:20 | 0:13:03 | 0:11:17 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_adoption.sh) on smithi001 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e068ebcf7ecc6503f24666fb6b152034d3fe1067 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'
fail | 7482184 | | 2023-12-07 16:38:56 | 2023-12-07 16:39:28 | 2023-12-07 17:17:38 | 0:38:10 | 0:29:22 | 0:08:48 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
dead | 7482185 | | 2023-12-07 16:38:57 | 2023-12-07 16:39:28 | 2023-12-07 17:01:11 | 0:21:43 | | | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 |
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
dead | 7482186 | | 2023-12-07 16:38:58 | 2023-12-07 16:39:28 | 2023-12-07 17:02:47 | 0:23:19 | | | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
fail | 7482187 | | 2023-12-07 16:38:58 | 2023-12-07 16:39:29 | 2023-12-07 17:26:40 | 0:47:11 | 0:33:00 | 0:14:11 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 |
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
fail | 7482188 | | 2023-12-07 16:38:59 | 2023-12-07 16:39:29 | 2023-12-07 17:03:14 | 0:23:45 | 0:13:35 | 0:10:10 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
pass | 7482189 | | 2023-12-07 16:39:00 | 2023-12-07 16:39:29 | 2023-12-07 17:14:36 | 0:35:07 | 0:23:06 | 0:12:01 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mgr} | 1 |
fail | 7482190 | | 2023-12-07 16:39:01 | 2023-12-07 16:39:30 | 2023-12-07 17:02:17 | 0:22:47 | 0:14:54 | 0:07:53 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 |
Failure Reason: Command failed on smithi071 with status 1: 'sudo yum -y install ceph'
fail | 7482191 | | 2023-12-07 16:39:01 | 2023-12-07 16:39:30 | 2023-12-07 17:35:59 | 0:56:29 | 0:43:25 | 0:13:04 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482192 | | 2023-12-07 16:39:02 | 2023-12-07 16:39:30 | 2023-12-07 17:41:58 | 1:02:28 | 0:50:57 | 0:11:31 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482193 | | 2023-12-07 16:39:03 | 2023-12-07 16:39:31 | 2023-12-07 17:26:11 | 0:46:40 | 0:34:00 | 0:12:40 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482194 | | 2023-12-07 16:39:04 | 2023-12-07 16:39:31 | 2023-12-07 17:17:26 | 0:37:55 | 0:26:13 | 0:11:42 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482195 | | 2023-12-07 16:39:04 | 2023-12-07 16:39:31 | 2023-12-07 17:06:04 | 0:26:33 | 0:16:56 | 0:09:37 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} | 1 |
Failure Reason: Command failed on smithi089 with status 1: 'sudo yum -y install ceph'
fail | 7482196 | | 2023-12-07 16:39:05 | 2023-12-07 16:39:32 | 2023-12-07 17:16:22 | 0:36:50 | 0:25:20 | 0:11:30 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482197 | | 2023-12-07 16:39:06 | 2023-12-07 16:39:32 | 2023-12-07 17:17:26 | 0:37:54 | 0:22:31 | 0:15:23 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} | 2 |
fail | 7482198 | | 2023-12-07 16:39:07 | 2023-12-07 16:39:32 | 2023-12-07 17:15:39 | 0:36:07 | 0:21:13 | 0:14:54 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
dead | 7482199 | | 2023-12-07 16:39:07 | 2023-12-07 16:39:33 | 2023-12-08 04:51:17 | 12:11:44 | | | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason: hit max job timeout
fail | 7482200 | | 2023-12-07 16:39:08 | 2023-12-07 16:39:33 | 2023-12-07 17:08:36 | 0:29:03 | 0:17:30 | 0:11:33 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} | 5 |
Failure Reason: Command failed on smithi134 with status 1: 'sudo yum -y install ceph'
dead | 7482201 | | 2023-12-07 16:39:09 | 2023-12-07 16:39:34 | 2023-12-08 04:52:10 | 12:12:36 | | | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 |
Failure Reason: hit max job timeout
fail | 7482202 | | 2023-12-07 16:39:10 | 2023-12-07 16:39:34 | 2023-12-07 18:58:36 | 2:19:02 | 2:05:48 | 0:13:14 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7482203 | | 2023-12-07 16:39:10 | 2023-12-07 16:39:34 | 2023-12-07 17:17:06 | 0:37:32 | 0:29:14 | 0:08:18 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: timeout expired in wait_until_healthy
fail | 7482204 | | 2023-12-07 16:39:11 | 2023-12-07 16:39:35 | 2023-12-07 17:08:29 | 0:28:54 | 0:14:50 | 0:14:04 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason: Command failed (workunit test post-file.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e068ebcf7ecc6503f24666fb6b152034d3fe1067 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'