User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2023-12-15 16:16:51 | 2023-12-15 16:20:43 | 2023-12-16 08:58:09 | 16:37:26 | rados | wip-yuri10-testing-2023-12-12-1229 | smithi | 021ac16 | 63 | 15 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7493247 | 2023-12-15 16:18:17 | 2023-12-15 16:20:43 | 2023-12-15 16:56:53 | 0:36:10 | 0:24:54 | 0:11:16 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_monitoring_stack_basic} | 3 | |
pass | 7493248 | 2023-12-15 16:18:18 | 2023-12-15 16:20:43 | 2023-12-15 17:12:14 | 0:51:31 | 0:39:36 | 0:11:55 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
pass | 7493249 | 2023-12-15 16:18:19 | 2023-12-15 16:22:54 | 2023-12-15 16:54:56 | 0:32:02 | 0:21:16 | 0:10:46 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7493250 | 2023-12-15 16:18:20 | 2023-12-15 16:23:44 | 2023-12-15 18:07:01 | 1:43:17 | 1:33:41 | 0:09:36 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} tasks/dashboard} | 2 | |
pass | 7493251 | 2023-12-15 16:18:21 | 2023-12-15 16:24:05 | 2023-12-15 16:45:42 | 0:21:37 | 0:11:22 | 0:10:15 | smithi | main | centos | 9.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_latest} tasks/crash} | 2 | |
dead | 7493252 | 2023-12-15 16:18:21 | 2023-12-15 16:24:25 | 2023-12-16 04:33:57 | 12:09:32 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason:
hit max job timeout
dead | 7493253 | 2023-12-15 16:18:22 | 2023-12-15 16:24:26 | 2023-12-16 08:44:40 | 16:20:14 | | | smithi | main | rhel | 8.6 | rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason:
hit max job timeout
pass | 7493254 | 2023-12-15 16:18:23 | 2023-12-15 16:24:56 | 2023-12-15 16:54:13 | 0:29:17 | 0:17:05 | 0:12:12 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7493255 | 2023-12-15 16:18:24 | 2023-12-15 16:26:17 | 2023-12-15 16:53:34 | 0:27:17 | 0:16:25 | 0:10:52 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7493256 | 2023-12-15 16:18:25 | 2023-12-15 16:26:27 | 2023-12-15 16:43:35 | 0:17:08 | 0:10:49 | 0:06:19 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
pass | 7493257 | 2023-12-15 16:18:25 | 2023-12-15 16:26:27 | 2023-12-15 17:04:05 | 0:37:38 | 0:26:05 | 0:11:33 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7493258 | 2023-12-15 16:18:26 | 2023-12-15 16:28:38 | 2023-12-15 16:55:16 | 0:26:38 | 0:15:07 | 0:11:31 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
fail | 7493259 | 2023-12-15 16:18:27 | 2023-12-15 16:30:09 | 2023-12-15 16:58:04 | 0:27:55 | 0:20:59 | 0:06:56 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:021ac1670a21c19759080c6da90baaeb42e7d175 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c046ce0-9b6a-11ee-95a5-87774f69a715 -- ceph rgw realm bootstrap -i -'
fail | 7493260 | 2023-12-15 16:18:28 | 2023-12-15 16:30:59 | 2023-12-15 16:49:52 | 0:18:53 | 0:09:19 | 0:09:34 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'
pass | 7493261 | 2023-12-15 16:18:29 | 2023-12-15 16:31:00 | 2023-12-15 18:05:57 | 1:34:57 | 1:21:38 | 0:13:19 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
fail | 7493262 | 2023-12-15 16:18:30 | 2023-12-15 16:34:00 | 2023-12-15 17:34:36 | 1:00:36 | 0:50:08 | 0:10:28 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
pass | 7493263 | 2023-12-15 16:18:30 | 2023-12-15 16:35:31 | 2023-12-15 17:00:40 | 0:25:09 | 0:16:14 | 0:08:55 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493264 | 2023-12-15 16:18:31 | 2023-12-15 16:35:32 | 2023-12-15 17:03:20 | 0:27:48 | 0:21:01 | 0:06:47 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 | |
pass | 7493265 | 2023-12-15 16:18:32 | 2023-12-15 16:36:02 | 2023-12-15 17:00:10 | 0:24:08 | 0:16:19 | 0:07:49 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7493266 | 2023-12-15 16:18:33 | 2023-12-15 16:36:52 | 2023-12-15 16:58:45 | 0:21:53 | 0:11:39 | 0:10:14 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} | 1 | |
pass | 7493267 | 2023-12-15 16:18:34 | 2023-12-15 16:36:53 | 2023-12-15 17:00:29 | 0:23:36 | 0:17:16 | 0:06:20 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7493268 | 2023-12-15 16:18:34 | 2023-12-15 16:37:23 | 2023-12-15 17:02:44 | 0:25:21 | 0:15:29 | 0:09:52 | smithi | main | centos | 9.stream | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest}} | 1 | |
pass | 7493269 | 2023-12-15 16:18:35 | 2023-12-15 16:38:04 | 2023-12-15 17:03:22 | 0:25:18 | 0:15:23 | 0:09:55 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_ca_signed_key} | 2 | |
pass | 7493270 | 2023-12-15 16:18:36 | 2023-12-15 16:38:14 | 2023-12-15 16:59:31 | 0:21:17 | 0:15:44 | 0:05:33 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7493271 | 2023-12-15 16:18:37 | 2023-12-15 16:38:14 | 2023-12-15 17:09:43 | 0:31:29 | 0:21:35 | 0:09:54 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
fail | 7493272 | 2023-12-15 16:18:38 | 2023-12-15 16:38:25 | 2023-12-15 17:08:35 | 0:30:10 | 0:18:29 | 0:11:41 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi178 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 7493273 | 2023-12-15 16:18:39 | 2023-12-15 16:39:15 | 2023-12-15 17:19:18 | 0:40:03 | 0:29:52 | 0:10:11 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7493274 | 2023-12-15 16:18:39 | 2023-12-15 16:39:56 | 2023-12-15 17:03:17 | 0:23:21 | 0:16:02 | 0:07:19 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493275 | 2023-12-15 16:18:40 | 2023-12-15 16:39:56 | 2023-12-15 17:08:17 | 0:28:21 | 0:18:52 | 0:09:29 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7493276 | 2023-12-15 16:18:41 | 2023-12-15 16:40:26 | 2023-12-15 16:59:35 | 0:19:09 | 0:12:03 | 0:07:06 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7493277 | 2023-12-15 16:18:42 | 2023-12-15 16:40:27 | 2023-12-15 17:01:44 | 0:21:17 | 0:11:31 | 0:09:46 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
pass | 7493278 | 2023-12-15 16:18:43 | 2023-12-15 16:40:37 | 2023-12-15 17:01:54 | 0:21:17 | 0:14:16 | 0:07:01 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7493279 | 2023-12-15 16:18:43 | 2023-12-15 16:40:47 | 2023-12-15 17:08:00 | 0:27:13 | 0:20:57 | 0:06:16 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 | |
fail | 7493280 | 2023-12-15 16:18:44 | 2023-12-15 16:40:58 | 2023-12-15 17:09:27 | 0:28:29 | 0:19:18 | 0:09:11 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
fail | 7493281 | 2023-12-15 16:18:45 | 2023-12-15 16:40:58 | 2023-12-15 17:33:17 | 0:52:19 | 0:41:23 | 0:10:56 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason:
Command failed on smithi173 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-5'
pass | 7493282 | 2023-12-15 16:18:46 | 2023-12-15 16:41:49 | 2023-12-15 17:17:05 | 0:35:16 | 0:24:17 | 0:10:59 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7493283 | 2023-12-15 16:18:47 | 2023-12-15 16:43:09 | 2023-12-15 17:30:50 | 0:47:41 | 0:34:50 | 0:12:51 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
pass | 7493284 | 2023-12-15 16:18:48 | 2023-12-15 16:43:40 | 2023-12-15 17:20:00 | 0:36:20 | 0:23:28 | 0:12:52 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/classic task/test_host_drain} | 3 | |
pass | 7493285 | 2023-12-15 16:18:48 | 2023-12-15 16:45:50 | 2023-12-15 17:16:10 | 0:30:20 | 0:19:10 | 0:11:10 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7493286 | 2023-12-15 16:18:49 | 2023-12-15 16:46:01 | 2023-12-15 17:11:38 | 0:25:37 | 0:16:32 | 0:09:05 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
pass | 7493287 | 2023-12-15 16:18:50 | 2023-12-15 16:46:01 | 2023-12-15 17:12:04 | 0:26:03 | 0:16:49 | 0:09:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493288 | 2023-12-15 16:18:51 | | 2023-12-15 17:19:21 | 1301 | | | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_monitoring_stack_basic} | 3 |
pass | 7493289 | 2023-12-15 16:18:52 | 2023-12-15 16:48:12 | 2023-12-15 17:12:14 | 0:24:02 | 0:14:13 | 0:09:49 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7493290 | 2023-12-15 16:18:53 | 2023-12-15 16:48:32 | 2023-12-15 17:16:14 | 0:27:42 | 0:19:49 | 0:07:53 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7493291 | 2023-12-15 16:18:53 | 2023-12-15 16:48:33 | 2023-12-15 17:19:53 | 0:31:20 | 0:20:18 | 0:11:02 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
pass | 7493292 | 2023-12-15 16:18:54 | 2023-12-15 16:50:53 | 2023-12-15 17:20:52 | 0:29:59 | 0:17:28 | 0:12:31 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7493293 | 2023-12-15 16:18:55 | 2023-12-15 16:53:44 | 2023-12-15 18:28:33 | 1:34:49 | 1:23:22 | 0:11:27 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/dashboard} | 2 | |
dead | 7493294 | 2023-12-15 16:18:56 | 2023-12-15 16:55:05 | 2023-12-16 05:04:50 | 12:09:45 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 |
Failure Reason:
hit max job timeout
dead | 7493295 | 2023-12-15 16:18:57 | 2023-12-15 16:55:05 | 2023-12-16 08:58:09 | 16:03:04 | | | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 |
Failure Reason:
hit max job timeout
pass | 7493296 | 2023-12-15 16:18:58 | 2023-12-15 16:55:25 | 2023-12-15 17:34:10 | 0:38:45 | 0:31:04 | 0:07:41 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7493297 | 2023-12-15 16:18:59 | 2023-12-15 16:56:56 | 2023-12-15 17:14:57 | 0:18:01 | 0:11:33 | 0:06:28 | smithi | main | rhel | 8.6 | rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 1-start 2-services/basic 3-final} | 1 | |
fail | 7493298 | 2023-12-15 16:18:59 | 2023-12-15 16:56:56 | 2023-12-15 17:29:51 | 0:32:55 | 0:21:30 | 0:11:25 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:021ac1670a21c19759080c6da90baaeb42e7d175 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92c8c18a-9b6d-11ee-95a5-87774f69a715 -- ceph rgw realm bootstrap -i -'
pass | 7493299 | 2023-12-15 16:19:00 | 2023-12-15 16:58:07 | 2023-12-15 17:25:19 | 0:27:12 | 0:17:26 | 0:09:46 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
fail | 7493300 | 2023-12-15 16:19:01 | 2023-12-15 16:58:27 | 2023-12-15 18:57:25 | 1:58:58 | 1:48:33 | 0:10:25 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
pass | 7493301 | 2023-12-15 16:19:02 | 2023-12-15 16:58:38 | 2023-12-15 17:22:35 | 0:23:57 | 0:17:04 | 0:06:53 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493302 | 2023-12-15 16:19:03 | 2023-12-15 16:59:18 | 2023-12-15 17:29:10 | 0:29:52 | 0:18:52 | 0:11:00 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 | |
fail | 7493303 | 2023-12-15 16:19:04 | 2023-12-15 16:59:39 | 2023-12-15 17:18:56 | 0:19:17 | 0:09:13 | 0:10:04 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason:
Command failed (workunit test post-file.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'
pass | 7493304 | 2023-12-15 16:19:04 | 2023-12-15 16:59:39 | 2023-12-15 17:39:38 | 0:39:59 | 0:30:35 | 0:09:24 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
pass | 7493305 | 2023-12-15 16:19:05 | 2023-12-15 17:00:20 | 2023-12-15 17:23:36 | 0:23:16 | 0:14:18 | 0:08:58 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_adoption} | 1 | |
pass | 7493306 | 2023-12-15 16:19:06 | 2023-12-15 17:00:30 | 2023-12-15 17:27:53 | 0:27:23 | 0:17:05 | 0:10:18 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7493307 | 2023-12-15 16:19:07 | 2023-12-15 17:00:40 | 2023-12-15 17:23:34 | 0:22:54 | 0:15:40 | 0:07:14 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7493308 | 2023-12-15 16:19:08 | 2023-12-15 17:00:41 | 2023-12-15 17:29:17 | 0:28:36 | 0:21:09 | 0:07:27 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_ca_signed_key} | 2 | |
fail | 7493309 | 2023-12-15 16:19:08 | 2023-12-15 17:01:21 | 2023-12-15 17:30:18 | 0:28:57 | 0:22:44 | 0:06:13 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_cephadm} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_cephadm.sh) on smithi028 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
pass | 7493310 | 2023-12-15 16:19:09 | 2023-12-15 17:01:52 | 2023-12-15 17:40:08 | 0:38:16 | 0:28:38 | 0:09:38 | smithi | main | ubuntu | 20.04 | rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493311 | 2023-12-15 16:19:10 | 2023-12-15 17:02:02 | 2023-12-15 17:28:46 | 0:26:44 | 0:18:29 | 0:08:15 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7493312 | 2023-12-15 16:19:11 | 2023-12-15 17:03:23 | 2023-12-15 18:44:47 | 1:41:24 | 1:31:41 | 0:09:43 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
pass | 7493313 | 2023-12-15 16:19:12 | 2023-12-15 17:03:23 | 2023-12-15 17:30:32 | 0:27:09 | 0:18:08 | 0:09:01 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 | |
fail | 7493314 | 2023-12-15 16:19:13 | 2023-12-15 17:03:23 | 2023-12-15 17:43:33 | 0:40:10 | 0:30:05 | 0:10:05 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} | 2 | |
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=021ac1670a21c19759080c6da90baaeb42e7d175 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
pass | 7493315 | 2023-12-15 16:19:13 | 2023-12-15 17:04:14 | 2023-12-15 17:30:59 | 0:26:45 | 0:17:09 | 0:09:36 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 7493316 | 2023-12-15 16:19:14 | 2023-12-15 17:04:14 | 2023-12-15 18:01:19 | 0:57:05 | 0:41:35 | 0:15:30 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason:
valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate
pass | 7493317 | 2023-12-15 16:19:15 | 2023-12-15 17:04:15 | 2023-12-15 17:27:58 | 0:23:43 | 0:13:59 | 0:09:44 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7493318 | 2023-12-15 16:19:16 | 2023-12-15 17:04:55 | 2023-12-15 17:26:38 | 0:21:43 | 0:11:31 | 0:10:12 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
pass | 7493319 | 2023-12-15 16:19:17 | 2023-12-15 17:04:55 | 2023-12-15 17:28:38 | 0:23:43 | 0:14:51 | 0:08:52 | smithi | main | rhel | 8.6 | rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7493320 | 2023-12-15 16:19:17 | 2023-12-15 17:07:36 | 2023-12-15 17:37:22 | 0:29:46 | 0:19:06 | 0:10:40 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/classic task/test_host_drain} | 3 | |
pass | 7493321 | 2023-12-15 16:19:18 | 2023-12-15 17:08:26 | 2023-12-15 17:37:55 | 0:29:29 | 0:18:20 | 0:11:09 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 | |
pass | 7493322 | 2023-12-15 16:19:19 | 2023-12-15 17:08:37 | 2023-12-15 17:40:55 | 0:32:18 | 0:22:27 | 0:09:51 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3 | |
pass | 7493323 | 2023-12-15 16:19:20 | 2023-12-15 17:09:48 | 2023-12-15 17:35:35 | 0:25:47 | 0:15:53 | 0:09:54 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7493324 | 2023-12-15 16:19:21 | 2023-12-15 17:10:18 | 2023-12-15 17:41:15 | 0:30:57 | 0:18:41 | 0:12:16 | smithi | main | ubuntu | 20.04 | rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7493325 | 2023-12-15 16:19:22 | 2023-12-15 17:10:18 | 2023-12-15 17:43:08 | 0:32:50 | 0:25:16 | 0:07:34 | smithi | main | rhel | 8.6 | rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_monitoring_stack_basic} | 3 | |
pass | 7493326 | 2023-12-15 16:19:23 | 2023-12-15 17:12:09 | 2023-12-15 17:42:09 | 0:30:00 | 0:19:04 | 0:10:56 | smithi | main | ubuntu | 20.04 | rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7493327 | 2023-12-15 16:19:23 | 2023-12-15 17:12:19 | 2023-12-15 17:35:54 | 0:23:35 | 0:17:41 | 0:05:54 | smithi | main | rhel | 8.6 | rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7493328 | 2023-12-15 16:19:24 | 2023-12-15 17:12:20 | 2023-12-15 17:35:28 | 0:23:08 | 0:14:42 | 0:08:26 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |