User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2023-08-11 12:02:56 | 2023-08-11 15:29:25 | 2023-08-12 03:42:27 | 12:13:02 | orch:cephadm | wip-adk-testing-2023-08-09-2121 | smithi | 7179039 | 3 | 11 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7367208 | 2023-08-11 12:03:02 | 2023-08-11 15:29:25 | 2023-08-11 15:56:34 | 0:27:09 | 0:18:23 | 0:08:46 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi096 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=71790394ed062cbeead65bd2eeba2f17128323b5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
dead | 7367209 | 2023-08-11 12:03:03 | 2023-08-11 15:29:26 | 2023-08-12 03:38:50 | 12:09:24 | | | smithi | main | centos | 8.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 |
Failure Reason:
hit max job timeout
fail | 7367210 | 2023-08-11 12:03:04 | 2023-08-11 15:29:26 | 2023-08-11 16:05:00 | 0:35:34 | 0:20:00 | 0:15:34 | smithi | main | centos | 8.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} | 3 |
Failure Reason:
timeout expired in wait_until_healthy
fail | 7367211 | 2023-08-11 12:03:04 | 2023-08-11 15:29:26 | 2023-08-11 15:57:39 | 0:28:13 | 0:17:54 | 0:10:19 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
Command failed on smithi184 with status 1: 'test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm'
fail | 7367212 | 2023-08-11 12:03:05 | 2023-08-11 15:30:27 | 2023-08-11 16:11:37 | 0:41:10 | 0:30:13 | 0:10:57 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 |
Failure Reason:
timeout expired in wait_until_healthy
fail | 7367213 | 2023-08-11 12:03:06 | 2023-08-11 15:31:27 | 2023-08-11 15:59:18 | 0:27:51 | 0:17:27 | 0:10:24 | smithi | main | ubuntu | 20.04 | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_nfs} | 1 |
Failure Reason:
Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
fail | 7367214 | 2023-08-11 12:03:07 | 2023-08-11 15:31:28 | 2023-08-11 16:02:53 | 0:31:25 | 0:20:42 | 0:10:43 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
Command failed on smithi064 with status 1: 'test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm'
dead | 7367215 | 2023-08-11 12:03:08 | 2023-08-11 15:31:48 | 2023-08-12 03:40:56 | 12:09:08 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
hit max job timeout
fail | 7367216 | 2023-08-11 12:03:09 | 2023-08-11 15:31:49 | 2023-08-11 16:01:41 | 0:29:52 | 0:18:24 | 0:11:28 | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
Command failed on smithi003 with status 1: 'test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm'
fail | 7367217 | 2023-08-11 12:03:10 | 2023-08-11 15:31:59 | 2023-08-11 16:52:52 | 1:20:53 | 1:08:28 | 0:12:25 | smithi | main | ubuntu | 20.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds
fail | 7367218 | 2023-08-11 12:03:11 | 2023-08-11 15:32:49 | 2023-08-11 16:16:41 | 0:43:52 | 0:29:14 | 0:14:38 | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 |
Failure Reason:
reached maximum tries (301) after waiting for 300 seconds
dead | 7367219 | 2023-08-11 12:03:12 | 2023-08-11 15:33:10 | 2023-08-12 03:42:27 | 12:09:17 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
hit max job timeout
pass | 7367220 | 2023-08-11 12:03:13 | 2023-08-11 15:33:10 | 2023-08-11 16:31:23 | 0:58:13 | 0:47:51 | 0:10:22 | smithi | main | ubuntu | 20.04 | orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 |
pass | 7367221 | 2023-08-11 12:03:13 | 2023-08-11 15:33:11 | 2023-08-11 16:06:30 | 0:33:19 | 0:25:27 | 0:07:52 | smithi | main | rhel | 8.6 | orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_extra_daemon_features} | 2 |
fail | 7367222 | 2023-08-11 12:03:14 | 2023-08-11 15:35:21 | 2023-08-11 16:06:14 | 0:30:53 | 0:20:54 | 0:09:59 | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} | 1 |
Failure Reason:
Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=71790394ed062cbeead65bd2eeba2f17128323b5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'
fail | 7367223 | 2023-08-11 12:03:16 | 2023-08-11 15:35:22 | 2023-08-11 15:45:39 | 0:10:17 | | | smithi | main | centos | 8.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason:
Command failed on smithi104 with status 1: 'sudo yum install -y kernel'
dead | 7367224 | 2023-08-11 12:03:17 | 2023-08-11 15:38:12 | 2023-08-11 15:45:07 | 0:06:55 | | | smithi | main | centos | 8.stream | orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} | 1 |
Failure Reason:
SSH connection to smithi104 was lost: 'sudo grub2-mkconfig -o /boot/grub2/grub.cfg'
pass | 7367225 | 2023-08-11 12:03:18 | 2023-08-11 15:38:13 | 2023-08-11 16:34:35 | 0:56:22 | 0:46:04 | 0:10:18 | smithi | main | ubuntu | 20.04 | orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} | 2 |