User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
adking | 2024-05-01 11:36:43 | 2024-05-01 12:55:49 | 2024-05-02 02:05:47 | 13:09:58 | orch:cephadm | wip-adk-testing-2024-04-30-1949 | smithi | 94d368a | 71 | 22 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7682817 | 2024-05-01 11:36:47 | 2024-05-01 12:55:49 | 2024-05-01 13:41:15 | 0:45:26 | 0:38:29 | 0:06:57 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass | 7682818 | 2024-05-01 11:36:48 | 2024-05-01 12:55:49 | 2024-05-01 14:07:39 | 1:11:50 | 0:58:29 | 0:13:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
pass | 7682819 | 2024-05-01 11:36:49 | 2024-05-01 12:57:40 | 2024-05-01 13:16:53 | 0:19:13 | 0:12:53 | 0:06:20 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7682820 | 2024-05-01 11:36:50 | 2024-05-01 12:57:50 | 2024-05-01 13:27:49 | 0:29:59 | 0:20:20 | 0:09:39 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 7682821 | 2024-05-01 11:36:51 | 2024-05-01 12:58:01 | 2024-05-01 13:39:11 | 0:41:10 | 0:34:42 | 0:06:28 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 7682822 | 2024-05-01 11:36:52 | 2024-05-01 12:58:01 | 2024-05-01 14:08:03 | 1:10:02 | 0:57:28 | 0:12:34 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
Failure Reason: "2024-05-01T13:25:21.601197+0000 mon.a (mon.0) 336 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log
pass | 7682823 | 2024-05-01 11:36:53 | 2024-05-01 13:00:22 | 2024-05-01 13:19:14 | 0:18:52 | 0:12:52 | 0:06:00 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
pass | 7682824 | 2024-05-01 11:36:54 | 2024-05-01 13:00:22 | 2024-05-01 13:24:36 | 0:24:14 | 0:13:54 | 0:10:20 | smithi | main | ubuntu | 22.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7682825 | 2024-05-01 11:36:55 | 2024-05-01 13:00:42 | 2024-05-01 13:31:16 | 0:30:34 | 0:18:41 | 0:11:53 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7682826 | 2024-05-01 11:36:56 | 2024-05-01 13:03:33 | 2024-05-01 13:43:26 | 0:39:53 | 0:26:01 | 0:13:52 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: "2024-05-01T13:23:19.575128+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 7682827 | 2024-05-01 11:36:57 | 2024-05-01 13:10:25 | 2024-05-01 13:37:54 | 0:27:29 | 0:18:36 | 0:08:53 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
fail | 7682828 | 2024-05-01 11:36:58 | 2024-05-01 13:10:25 | 2024-05-01 13:33:25 | 0:23:00 | 0:15:55 | 0:07:05 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: "2024-05-01T13:27:10.175319+0000 mon.smithi040 (mon.0) 858 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi040.cqvtza on smithi040: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi146.qxoeto on smithi146: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7682829 | 2024-05-01 11:36:59 | 2024-05-01 13:10:25 | 2024-05-01 13:27:03 | 0:16:38 | 0:11:08 | 0:05:30 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
pass | 7682830 | 2024-05-01 11:37:00 | 2024-05-01 13:10:26 | 2024-05-01 13:27:21 | 0:16:55 | 0:11:01 | 0:05:54 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
pass | 7682831 | 2024-05-01 11:37:01 | 2024-05-01 13:10:26 | 2024-05-01 13:23:18 | 0:12:52 | 0:06:22 | 0:06:30 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_repos} | 1 | |
pass | 7682832 | 2024-05-01 11:37:02 | 2024-05-01 13:10:26 | 2024-05-01 13:34:09 | 0:23:43 | 0:15:07 | 0:08:36 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7682833 | 2024-05-01 11:37:03 | 2024-05-01 13:10:27 | 2024-05-01 13:40:13 | 0:29:46 | 0:20:13 | 0:09:33 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason: "2024-05-01T13:35:35.906413+0000 mon.smithi012 (mon.0) 835 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi012.frhodp on smithi012: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
fail | 7682834 | 2024-05-01 11:37:04 | 2024-05-01 13:10:27 | 2024-05-01 16:55:50 | 3:45:23 | 3:36:12 | 0:09:11 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi191 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=94d368ab6b242524ce590f46f7674c3c845000f8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'
pass | 7682835 | 2024-05-01 11:37:05 | 2024-05-01 13:10:28 | 2024-05-01 13:35:51 | 0:25:23 | 0:12:12 | 0:13:11 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7682836 | 2024-05-01 11:37:06 | 2024-05-01 13:16:19 | 2024-05-01 13:33:16 | 0:16:57 | 0:09:53 | 0:07:04 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} | 2 | |
pass | 7682837 | 2024-05-01 11:37:07 | 2024-05-01 13:16:19 | 2024-05-01 13:51:54 | 0:35:35 | 0:27:49 | 0:07:46 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
pass | 7682838 | 2024-05-01 11:37:08 | 2024-05-01 13:17:00 | 2024-05-01 14:04:11 | 0:47:11 | 0:33:30 | 0:13:41 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
pass | 7682839 | 2024-05-01 11:37:09 | 2024-05-01 13:25:45 | 2024-05-01 13:53:17 | 0:27:32 | 0:18:28 | 0:09:04 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} | 2 | |
fail | 7682840 | 2024-05-01 11:37:10 | 2024-05-01 13:25:45 | 2024-05-01 13:47:24 | 0:21:39 | 0:13:42 | 0:07:57 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: "2024-05-01T13:42:20.167328+0000 mon.smithi022 (mon.0) 791 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi022.xsplfa on smithi022: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7682841 | 2024-05-01 11:37:11 | 2024-05-01 13:25:45 | 2024-05-01 13:56:37 | 0:30:52 | 0:23:42 | 0:07:10 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7682842 | 2024-05-01 11:37:12 | 2024-05-01 13:25:46 | 2024-05-01 13:45:01 | 0:19:15 | 0:12:36 | 0:06:39 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
fail | 7682843 | 2024-05-01 11:37:13 | 2024-05-01 13:25:46 | 2024-05-01 13:59:29 | 0:33:43 | 0:24:34 | 0:09:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-05-01T13:55:41.956827+0000 mon.a (mon.0) 1145 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log
fail | 7682844 | 2024-05-01 11:37:14 | 2024-05-01 13:25:47 | 2024-05-01 13:46:48 | 0:21:01 | 0:14:52 | 0:06:09 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason: "2024-05-01T13:42:28.001983+0000 mon.a (mon.0) 493 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi132 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7682845 | 2024-05-01 11:37:15 | 2024-05-01 13:25:47 | 2024-05-01 13:57:28 | 0:31:41 | 0:19:12 | 0:12:29 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
pass | 7682846 | 2024-05-01 11:37:16 | 2024-05-01 13:27:08 | 2024-05-01 13:46:08 | 0:19:00 | 0:13:11 | 0:05:49 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7682847 | 2024-05-01 11:37:17 | 2024-05-01 13:27:28 | 2024-05-01 13:45:15 | 0:17:47 | 0:11:09 | 0:06:38 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7682848 | 2024-05-01 11:37:18 | 2024-05-01 13:27:58 | 2024-05-01 14:09:55 | 0:41:57 | 0:32:29 | 0:09:28 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7682849 | 2024-05-01 11:37:19 | 2024-05-01 13:30:39 | 2024-05-01 13:53:45 | 0:23:06 | 0:16:15 | 0:06:51 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
fail | 7682850 | 2024-05-01 11:37:20 | 2024-05-01 13:30:49 | 2024-05-01 13:48:26 | 0:17:37 | 0:10:46 | 0:06:51 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason: "2024-05-01T13:45:23.212331+0000 mon.a (mon.0) 530 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi129 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7682851 | 2024-05-01 11:37:21 | 2024-05-01 13:31:20 | 2024-05-01 14:28:23 | 0:57:03 | 0:48:03 | 0:09:00 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: "2024-05-01T14:00:49.944354+0000 mon.a (mon.0) 954 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon iscsi.foo.smithi043.ubwslc on smithi043 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7682852 | 2024-05-01 11:37:22 | 2024-05-01 13:33:21 | 2024-05-01 13:55:56 | 0:22:35 | 0:14:19 | 0:08:16 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7682853 | 2024-05-01 11:37:23 | 2024-05-01 13:34:11 | 2024-05-01 14:17:29 | 0:43:18 | 0:33:37 | 0:09:41 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7682854 | 2024-05-01 11:37:24 | 2024-05-01 13:35:52 | 2024-05-01 14:07:00 | 0:31:08 | 0:21:39 | 0:09:29 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
pass | 7682855 | 2024-05-01 11:37:25 | 2024-05-01 13:38:02 | 2024-05-01 13:58:17 | 0:20:15 | 0:13:16 | 0:06:59 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7682856 | 2024-05-01 11:37:26 | 2024-05-01 13:39:13 | 2024-05-01 14:00:42 | 0:21:29 | 0:12:30 | 0:08:59 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 | |
pass | 7682857 | 2024-05-01 11:37:27 | 2024-05-01 13:39:13 | 2024-05-01 14:00:08 | 0:20:55 | 0:11:47 | 0:09:08 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
pass | 7682858 | 2024-05-01 11:37:28 | | 2024-05-01 13:59:40 | | 721 | | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
pass | 7682859 | 2024-05-01 11:37:29 | 2024-05-01 13:41:04 | 2024-05-01 14:11:45 | 0:30:41 | 0:19:05 | 0:11:36 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7682860 | 2024-05-01 11:37:30 | 2024-05-01 13:41:04 | 2024-05-01 14:00:08 | 0:19:04 | 0:12:21 | 0:06:43 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7682861 | 2024-05-01 11:37:31 | 2024-05-01 13:41:15 | 2024-05-01 14:18:05 | 0:36:50 | 0:24:33 | 0:12:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
fail | 7682862 | 2024-05-01 11:37:32 | 2024-05-01 13:44:36 | 2024-05-01 14:59:47 | 1:15:11 | 1:07:46 | 0:07:25 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass | 7682863 | 2024-05-01 11:37:33 | 2024-05-01 13:44:36 | 2024-05-01 14:04:26 | 0:19:50 | 0:12:56 | 0:06:54 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7682864 | 2024-05-01 11:37:34 | 2024-05-01 13:45:07 | 2024-05-01 14:04:25 | 0:19:18 | 0:13:13 | 0:06:05 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7682865 | 2024-05-01 11:37:35 | 2024-05-01 13:45:17 | 2024-05-01 14:13:49 | 0:28:32 | 0:17:46 | 0:10:46 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7682866 | 2024-05-01 11:37:36 | 2024-05-01 13:45:57 | 2024-05-01 14:13:42 | 0:27:45 | 0:17:49 | 0:09:56 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
fail | 7682867 | 2024-05-01 11:37:37 | 2024-05-01 13:46:18 | 2024-05-01 20:03:40 | 6:17:22 | 2:39:03 | 3:38:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: Command failed on smithi204 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:94d368ab6b242524ce590f46f7674c3c845000f8 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9943a26-07e0-11ef-bc96-c7b262605968 -- bash -c 'please fail now'"
pass | 7682868 | 2024-05-01 11:37:38 | 2024-05-01 13:46:48 | 2024-05-01 14:44:01 | 0:57:13 | 0:45:43 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
pass | 7682869 | 2024-05-01 11:37:39 | 2024-05-01 13:48:09 | 2024-05-01 14:40:18 | 0:52:09 | 0:38:51 | 0:13:18 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7682870 | 2024-05-01 11:37:40 | 2024-05-01 13:49:09 | 2024-05-01 14:10:53 | 0:21:44 | 0:12:15 | 0:09:29 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
pass | 7682871 | 2024-05-01 11:37:41 | 2024-05-01 13:52:00 | 2024-05-01 14:13:02 | 0:21:02 | 0:13:42 | 0:07:20 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} | 1 | |
pass | 7682872 | 2024-05-01 11:37:42 | 2024-05-01 13:52:01 | 2024-05-01 14:10:23 | 0:18:22 | 0:11:03 | 0:07:19 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7682873 | 2024-05-01 11:37:42 | 2024-05-01 13:53:21 | 2024-05-01 14:12:53 | 0:19:32 | 0:13:02 | 0:06:30 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7682874 | 2024-05-01 11:37:43 | 2024-05-01 13:53:51 | 2024-05-01 14:15:49 | 0:21:58 | 0:11:36 | 0:10:22 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7682875 | 2024-05-01 11:37:44 | 2024-05-01 13:56:02 | 2024-05-01 14:13:42 | 0:17:40 | 0:10:56 | 0:06:44 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
dead | 7682876 | 2024-05-01 11:37:45 | 2024-05-01 13:56:33 | 2024-05-02 02:05:47 | 12:09:14 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
Failure Reason: hit max job timeout
pass | 7682877 | 2024-05-01 11:37:46 | 2024-05-01 13:56:33 | 2024-05-01 14:15:28 | 0:18:55 | 0:12:43 | 0:06:12 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7682878 | 2024-05-01 11:37:47 | 2024-05-01 13:56:33 | 2024-05-01 14:13:35 | 0:17:02 | 0:10:31 | 0:06:31 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 | |
pass | 7682879 | 2024-05-01 11:37:48 | 2024-05-01 13:56:34 | 2024-05-01 14:35:26 | 0:38:52 | 0:32:37 | 0:06:15 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7682880 | 2024-05-01 11:37:49 | 2024-05-01 13:56:34 | 2024-05-01 14:48:18 | 0:51:44 | 0:41:30 | 0:10:14 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason: "2024-05-01T14:17:35.567954+0000 mon.a (mon.0) 426 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7682881 | 2024-05-01 11:37:50 | 2024-05-01 13:56:34 | 2024-05-01 14:18:51 | 0:22:17 | 0:15:01 | 0:07:16 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
Failure Reason: "2024-05-01T14:12:54.097512+0000 mon.a (mon.0) 280 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon mon.b on smithi132 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7682882 | 2024-05-01 11:37:51 | 2024-05-01 13:56:35 | 2024-05-01 14:26:15 | 0:29:40 | 0:20:30 | 0:09:10 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
pass | 7682883 | 2024-05-01 11:37:52 | 2024-05-01 13:56:35 | 2024-05-01 14:17:59 | 0:21:24 | 0:14:45 | 0:06:39 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 7682884 | 2024-05-01 11:37:53 | 2024-05-01 13:56:56 | 2024-05-01 14:15:36 | 0:18:40 | 0:12:55 | 0:05:45 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} | 1 | |
pass | 7682885 | 2024-05-01 11:37:54 | 2024-05-01 13:56:56 | 2024-05-01 14:17:34 | 0:20:38 | 0:12:39 | 0:07:59 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7682886 | 2024-05-01 11:37:55 | 2024-05-01 13:58:27 | 2024-05-01 14:16:31 | 0:18:04 | 0:11:03 | 0:07:01 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7682887 | 2024-05-01 11:37:56 | 2024-05-01 13:59:47 | 2024-05-01 14:21:05 | 0:21:18 | 0:12:54 | 0:08:24 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
fail | 7682888 | 2024-05-01 11:37:57 | 2024-05-01 13:59:57 | 2024-05-01 14:23:20 | 0:23:23 | 0:15:25 | 0:07:58 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: "2024-05-01T14:16:30.858175+0000 mon.smithi040 (mon.0) 861 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi040.hjxolu on smithi040: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi146.mngijq on smithi146: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7682889 | 2024-05-01 11:37:58 | 2024-05-01 14:04:14 | 2024-05-01 15:09:44 | 1:05:30 | 0:57:37 | 0:07:53 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 7682890 | 2024-05-01 11:37:59 | 2024-05-01 14:04:34 | 2024-05-01 15:01:20 | 0:56:46 | 0:45:18 | 0:11:28 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 7682891 | 2024-05-01 11:38:00 | 2024-05-01 14:04:34 | 2024-05-01 14:23:48 | 0:19:14 | 0:10:40 | 0:08:34 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
pass | 7682892 | 2024-05-01 11:38:01 | 2024-05-01 14:07:05 | 2024-05-01 14:39:34 | 0:32:29 | 0:22:05 | 0:10:24 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7682893 | 2024-05-01 11:38:02 | 2024-05-01 14:07:46 | 2024-05-01 15:10:07 | 1:02:21 | 0:54:16 | 0:08:05 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
fail | 7682894 | 2024-05-01 11:38:03 | 2024-05-01 14:08:46 | 2024-05-01 14:30:04 | 0:21:18 | 0:13:04 | 0:08:14 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason: "2024-05-01T14:25:08.275945+0000 mon.smithi161 (mon.0) 848 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi161.mcggwx on smithi161: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7682895 | 2024-05-01 11:38:04 | 2024-05-01 14:08:46 | 2024-05-01 14:27:44 | 0:18:58 | 0:10:02 | 0:08:56 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
pass | 7682896 | 2024-05-01 11:38:05 | 2024-05-01 14:10:27 | 2024-05-01 14:40:05 | 0:29:38 | 0:19:32 | 0:10:06 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7682897 | 2024-05-01 11:38:06 | 2024-05-01 14:10:27 | 2024-05-01 14:38:37 | 0:28:10 | 0:18:19 | 0:09:51 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7682898 | 2024-05-01 11:38:07 | 2024-05-01 14:10:58 | 2024-05-01 14:40:34 | 0:29:36 | 0:17:47 | 0:11:49 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7682899 | 2024-05-01 11:38:08 | 2024-05-01 14:11:48 | 2024-05-01 14:32:55 | 0:21:07 | 0:13:44 | 0:07:23 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: "2024-05-01T14:27:58.010619+0000 mon.smithi052 (mon.0) 790 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi052.owduli on smithi052: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7682900 | 2024-05-01 11:38:09 | 2024-05-01 14:12:09 | 2024-05-01 15:00:10 | 0:48:01 | 0:37:48 | 0:10:13 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7682901 | 2024-05-01 11:38:10 | 2024-05-01 14:12:29 | 2024-05-01 14:42:44 | 0:30:15 | 0:19:01 | 0:11:14 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7682902 | 2024-05-01 11:38:11 | 2024-05-01 14:12:30 | 2024-05-01 14:47:26 | 0:34:56 | 0:27:43 | 0:07:13 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7682903 | 2024-05-01 11:38:12 | 2024-05-01 14:12:30 | 2024-05-01 14:41:56 | 0:29:26 | 0:21:41 | 0:07:45 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 7682904 | 2024-05-01 11:38:12 | 2024-05-01 14:13:00 | 2024-05-01 14:26:22 | 0:13:22 | 0:06:12 | 0:07:10 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7682905 | 2024-05-01 11:38:13 | 2024-05-01 14:13:41 | 2024-05-01 14:53:02 | 0:39:21 | 0:31:46 | 0:07:35 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: "2024-05-01T14:40:00.000122+0000 mon.smithi069 (mon.0) 300 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down" in cluster log
pass | 7682906 | 2024-05-01 11:38:14 | 2024-05-01 14:13:52 | 2024-05-01 14:32:46 | 0:18:54 | 0:12:34 | 0:06:20 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
fail | 7682907 | 2024-05-01 11:38:15 | 2024-05-01 14:13:52 | 2024-05-01 15:04:58 | 0:51:06 | 0:43:56 | 0:07:10 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b38aa62-07c6-11ef-bc94-c7b262605968 -e sha1=94d368ab6b242524ce590f46f7674c3c845000f8 -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\''
pass | 7682908 | 2024-05-01 11:38:16 | 2024-05-01 14:13:53 | 2024-05-01 14:33:01 | 0:19:08 | 0:13:25 | 0:05:43 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7682909 | 2024-05-01 11:38:17 | 2024-05-01 14:13:53 | 2024-05-01 14:32:40 | 0:18:47 | 0:10:47 | 0:08:00 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} | 2 | |
fail | 7682910 | 2024-05-01 11:38:18 | 2024-05-01 14:15:34 | 2024-05-01 14:51:33 | 0:35:59 | 0:24:29 | 0:11:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-05-01T14:46:09.108860+0000 mon.a (mon.0) 1158 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log