Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7699458 2024-05-09 03:09:46 2024-05-09 03:11:34 2024-05-09 04:00:20 0:48:46 0:40:14 0:08:32 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7699459 2024-05-09 03:09:48 2024-05-09 03:11:34 2024-05-09 04:20:38 1:09:04 0:57:30 0:11:34 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
pass 7699460 2024-05-09 03:09:49 2024-05-09 03:12:35 2024-05-09 03:35:43 0:23:08 0:13:15 0:09:53 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7699461 2024-05-09 03:09:50 2024-05-09 03:12:35 2024-05-09 03:43:34 0:30:59 0:20:28 0:10:31 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7699462 2024-05-09 03:09:51 2024-05-09 03:13:45 2024-05-09 04:03:09 0:49:24 0:36:28 0:12:56 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7699463 2024-05-09 03:09:52 2024-05-09 03:15:16 2024-05-09 04:26:26 1:11:10 0:59:20 0:11:50 smithi main ubuntu 22.04 orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
Failure Reason:

"2024-05-09T03:43:27.993292+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

pass 7699464 2024-05-09 03:09:53 2024-05-09 03:16:47 2024-05-09 03:38:44 0:21:57 0:12:51 0:09:06 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
pass 7699465 2024-05-09 03:09:54 2024-05-09 03:17:27 2024-05-09 03:40:59 0:23:32 0:14:31 0:09:01 smithi main ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} 2
pass 7699466 2024-05-09 03:09:55 2024-05-09 03:17:27 2024-05-09 03:50:02 0:32:35 0:20:18 0:12:17 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7699467 2024-05-09 03:09:56 2024-05-09 03:20:38 2024-05-09 03:59:21 0:38:43 0:27:23 0:11:20 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-05-09T03:39:34.230862+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7699468 2024-05-09 03:09:57 2024-05-09 03:20:38 2024-05-09 03:51:55 0:31:17 0:21:01 0:10:16 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} 2
fail 7699469 2024-05-09 03:09:58 2024-05-09 03:20:39 2024-05-09 03:48:27 0:27:48 0:17:02 0:10:46 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-05-09T03:42:52.017010+0000 mon.smithi094 (mon.0) 860 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi094.tudmau on smithi094: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi163.ybtejb on smithi163: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699470 2024-05-09 03:09:59 2024-05-09 03:20:39 2024-05-09 03:40:10 0:19:31 0:10:39 0:08:52 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
pass 7699471 2024-05-09 03:10:00 2024-05-09 03:20:39 2024-05-09 03:43:32 0:22:53 0:11:59 0:10:54 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
pass 7699472 2024-05-09 03:10:01 2024-05-09 03:20:40 2024-05-09 03:37:10 0:16:30 0:06:36 0:09:54 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7699473 2024-05-09 03:10:02 2024-05-09 03:20:40 2024-05-09 03:48:50 0:28:10 0:17:12 0:10:58 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7699474 2024-05-09 03:10:03 2024-05-09 03:20:41 2024-05-09 03:53:53 0:33:12 0:21:56 0:11:16 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-05-09T03:49:07.318198+0000 mon.smithi123 (mon.0) 838 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi123.klrglc on smithi123: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699475 2024-05-09 03:10:04 2024-05-09 03:20:41 2024-05-09 04:04:44 0:44:03 0:34:24 0:09:39 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7699476 2024-05-09 03:10:05 2024-05-09 03:20:41 2024-05-09 03:44:51 0:24:10 0:13:24 0:10:46 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7699477 2024-05-09 03:10:06 2024-05-09 03:20:42 2024-05-09 03:40:58 0:20:16 0:10:32 0:09:44 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} 2
pass 7699478 2024-05-09 03:10:08 2024-05-09 03:20:42 2024-05-09 04:00:20 0:39:38 0:29:25 0:10:13 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 7699479 2024-05-09 03:10:09 2024-05-09 03:20:42 2024-05-09 04:08:58 0:48:16 0:37:40 0:10:36 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-05-09T03:50:00.000334+0000 mon.a (mon.0) 1402 : cluster [WRN] [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) ['stray daemon laundry.close-test-pid71236 on host smithi044 not managed by cephadm'] not managed by cephadm" in cluster log

pass 7699480 2024-05-09 03:10:10 2024-05-09 03:20:43 2024-05-09 03:51:51 0:31:08 0:20:35 0:10:33 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} 2
fail 7699481 2024-05-09 03:10:11 2024-05-09 03:20:43 2024-05-09 03:43:38 0:22:55 0:14:10 0:08:45 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-05-09T03:40:02.152439+0000 mon.smithi053 (mon.0) 790 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi053.vgqbtd on smithi053: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699482 2024-05-09 03:10:12 2024-05-09 03:20:43 2024-05-09 03:57:30 0:36:47 0:25:41 0:11:06 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 7699483 2024-05-09 03:10:13 2024-05-09 03:20:44 2024-05-09 03:44:19 0:23:35 0:12:41 0:10:54 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 7699484 2024-05-09 03:10:14 2024-05-09 03:20:44 2024-05-09 03:56:21 0:35:37 0:25:59 0:09:38 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-05-09T03:51:39.366758+0000 mon.a (mon.0) 1128 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

fail 7699485 2024-05-09 03:10:15 2024-05-09 03:20:45 2024-05-09 03:49:26 0:28:41 0:18:04 0:10:37 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

"2024-05-09T03:45:58.218564+0000 mon.a (mon.0) 605 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.3 on smithi148 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699486 2024-05-09 03:10:16 2024-05-09 03:20:45 2024-05-09 03:55:04 0:34:19 0:21:55 0:12:24 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
pass 7699487 2024-05-09 03:10:17 2024-05-09 03:20:45 2024-05-09 03:47:16 0:26:31 0:14:55 0:11:36 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7699488 2024-05-09 03:10:18 2024-05-09 03:20:46 2024-05-09 03:47:03 0:26:17 0:12:47 0:13:30 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} 2
fail 7699489 2024-05-09 03:10:19 2024-05-09 03:20:46 2024-05-09 04:09:28 0:48:42 0:35:30 0:13:12 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-05-09T04:00:00.000168+0000 mon.smithi046 (mon.0) 632 : cluster [WRN] Health detail: HEALTH_WARN 1 filesystem is degraded" in cluster log

pass 7699490 2024-05-09 03:10:20 2024-05-09 03:23:47 2024-05-09 03:53:23 0:29:36 0:17:58 0:11:38 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7699491 2024-05-09 03:10:21 2024-05-09 03:25:37 2024-05-09 03:48:27 0:22:50 0:12:01 0:10:49 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

"2024-05-09T03:45:57.554871+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi150 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7699492 2024-05-09 03:10:22 2024-05-09 03:27:18 2024-05-09 04:28:40 1:01:22 0:48:04 0:13:18 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-05-09T04:00:19.068007+0000 mon.a (mon.0) 928 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon iscsi.foo.smithi078.vnplzc on smithi078 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699493 2024-05-09 03:10:23 2024-05-09 03:27:28 2024-05-09 03:54:41 0:27:13 0:14:31 0:12:42 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
pass 7699494 2024-05-09 03:10:24 2024-05-09 03:29:39 2024-05-09 04:19:18 0:49:39 0:33:45 0:15:54 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 7699495 2024-05-09 03:10:26 2024-05-09 03:35:50 2024-05-09 04:07:01 0:31:11 0:21:48 0:09:23 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7699496 2024-05-09 03:10:27 2024-05-09 03:36:11 2024-05-09 03:59:20 0:23:09 0:13:32 0:09:37 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
pass 7699497 2024-05-09 03:10:28 2024-05-09 03:36:11 2024-05-09 03:57:41 0:21:30 0:12:39 0:08:51 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} 1
pass 7699498 2024-05-09 03:10:29 2024-05-09 03:36:11 2024-05-09 03:57:14 0:21:03 0:12:08 0:08:55 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
pass 7699499 2024-05-09 03:10:30 2024-05-09 03:36:12 2024-05-09 03:56:57 0:20:45 0:12:31 0:08:14 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
pass 7699500 2024-05-09 03:10:31 2024-05-09 03:36:12 2024-05-09 04:04:35 0:28:23 0:18:02 0:10:21 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7699501 2024-05-09 03:10:32 2024-05-09 03:36:23 2024-05-09 04:00:49 0:24:26 0:12:34 0:11:52 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 7699502 2024-05-09 03:10:33 2024-05-09 03:37:13 2024-05-09 04:16:57 0:39:44 0:24:43 0:15:01 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
fail 7699503 2024-05-09 03:10:34 2024-05-09 03:41:04 2024-05-09 04:46:10 1:05:06 0:54:21 0:10:45 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7699504 2024-05-09 03:10:35 2024-05-09 03:41:04 2024-05-09 04:07:02 0:25:58 0:12:50 0:13:08 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
fail 7699505 2024-05-09 03:10:36 2024-05-09 03:43:35 2024-05-09 04:06:44 0:23:09 0:13:48 0:09:21 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-05-09T04:01:40.068563+0000 mon.a (mon.0) 696 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.6 on smithi156 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699506 2024-05-09 03:10:37 2024-05-09 03:43:35 2024-05-09 04:11:29 0:27:54 0:18:09 0:09:45 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7699507 2024-05-09 03:10:38 2024-05-09 03:43:36 2024-05-09 04:12:05 0:28:29 0:17:55 0:10:34 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
fail 7699508 2024-05-09 03:10:39 2024-05-09 03:44:26 2024-05-09 04:15:22 0:30:56 0:20:12 0:10:44 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

"2024-05-09T04:10:48.683702+0000 mon.smithi007 (mon.0) 789 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon jaeger-agent.smithi007 on smithi007 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7699509 2024-05-09 03:10:40 2024-05-09 03:44:57 2024-05-09 04:40:41 0:55:44 0:45:24 0:10:20 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-05-09T04:20:00.030601+0000 mon.a (mon.0) 1372 : cluster [ERR] [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) ['stray daemon laundry.pid70045 on host smithi122 not managed by cephadm', 'stray daemon laundry.pid70107 on host smithi122 not managed by cephadm'] not managed by cephadm" in cluster log

pass 7699510 2024-05-09 03:10:41 2024-05-09 03:44:57 2024-05-09 04:35:17 0:50:20 0:39:42 0:10:38 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
pass 7699511 2024-05-09 03:10:42 2024-05-09 03:45:27 2024-05-09 04:10:35 0:25:08 0:13:17 0:11:51 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
pass 7699512 2024-05-09 03:10:43 2024-05-09 03:47:08 2024-05-09 04:10:26 0:23:18 0:13:49 0:09:29 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
pass 7699513 2024-05-09 03:10:44 2024-05-09 03:47:18 2024-05-09 04:10:50 0:23:32 0:12:18 0:11:14 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} 2
pass 7699514 2024-05-09 03:10:45 2024-05-09 03:48:59 2024-05-09 04:14:29 0:25:30 0:13:49 0:11:41 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7699515 2024-05-09 03:10:46 2024-05-09 03:49:10 2024-05-09 04:10:23 0:21:13 0:11:50 0:09:23 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} 1
fail 7699516 2024-05-09 03:10:47 2024-05-09 03:49:10 2024-05-09 04:13:11 0:24:01 0:11:25 0:12:36 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"2024-05-09T04:10:01.929285+0000 mon.a (mon.0) 521 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi087 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699517 2024-05-09 03:10:48 2024-05-09 03:50:11 2024-05-09 04:16:36 0:26:25 0:15:51 0:10:34 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7699518 2024-05-09 03:10:49 2024-05-09 03:51:41 2024-05-09 04:15:20 0:23:39 0:13:26 0:10:13 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7699519 2024-05-09 03:10:50 2024-05-09 03:51:42 2024-05-09 04:13:06 0:21:24 0:11:11 0:10:13 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} 2
pass 7699520 2024-05-09 03:10:52 2024-05-09 03:51:42 2024-05-09 04:34:44 0:43:02 0:33:20 0:09:42 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7699521 2024-05-09 03:10:53 2024-05-09 03:51:42 2024-05-09 04:43:24 0:51:42 0:42:07 0:09:35 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"2024-05-09T04:18:08.042461+0000 mon.a (mon.0) 874 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699522 2024-05-09 03:10:54 2024-05-09 03:51:43 2024-05-09 04:16:43 0:25:00 0:15:30 0:09:30 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
pass 7699523 2024-05-09 03:10:55 2024-05-09 03:51:43 2024-05-09 04:21:11 0:29:28 0:20:40 0:08:48 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 7699524 2024-05-09 03:10:56 2024-05-09 03:51:44 2024-05-09 04:14:54 0:23:10 0:14:18 0:08:52 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7699525 2024-05-09 03:10:57 2024-05-09 03:51:44 2024-05-09 04:14:44 0:23:00 0:13:29 0:09:31 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
pass 7699526 2024-05-09 03:10:58 2024-05-09 03:51:44 2024-05-09 04:14:59 0:23:15 0:13:02 0:10:13 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7699527 2024-05-09 03:10:59 2024-05-09 03:51:55 2024-05-09 04:13:19 0:21:24 0:11:35 0:09:49 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} 2
pass 7699528 2024-05-09 03:11:00 2024-05-09 03:52:05 2024-05-09 04:17:13 0:25:08 0:13:23 0:11:45 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7699529 2024-05-09 03:11:01 2024-05-09 03:53:26 2024-05-09 04:19:46 0:26:20 0:15:59 0:10:21 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-05-09T04:14:34.608754+0000 mon.smithi040 (mon.0) 863 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi040.dhsrki on smithi040: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi119.uvxjrj on smithi119: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699530 2024-05-09 03:11:03 2024-05-09 03:54:46 2024-05-09 05:08:32 1:13:46 1:03:44 0:10:02 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7699531 2024-05-09 03:11:04 2024-05-09 03:55:07 2024-05-09 04:52:42 0:57:35 0:45:46 0:11:49 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
pass 7699532 2024-05-09 03:11:05 2024-05-09 03:57:07 2024-05-09 04:18:26 0:21:19 0:11:28 0:09:51 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
pass 7699533 2024-05-09 03:11:06 2024-05-09 03:57:18 2024-05-09 04:28:57 0:31:39 0:22:53 0:08:46 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7699534 2024-05-09 03:11:07 2024-05-09 03:57:18 2024-05-09 05:16:41 1:19:23 1:08:53 0:10:30 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7699535 2024-05-09 03:11:08 2024-05-09 03:57:39 2024-05-09 04:21:18 0:23:39 0:14:01 0:09:38 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-05-09T04:17:50.743314+0000 mon.smithi072 (mon.0) 859 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi072.slbmsv on smithi072: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699536 2024-05-09 03:11:09 2024-05-09 03:57:39 2024-05-09 04:19:58 0:22:19 0:10:25 0:11:54 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
pass 7699537 2024-05-09 03:11:10 2024-05-09 03:58:40 2024-05-09 04:28:02 0:29:22 0:19:34 0:09:48 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
pass 7699538 2024-05-09 03:11:11 2024-05-09 03:58:40 2024-05-09 04:29:24 0:30:44 0:19:00 0:11:44 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7699539 2024-05-09 03:11:12 2024-05-09 03:59:21 2024-05-09 04:28:09 0:28:48 0:17:10 0:11:38 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} 2
fail 7699540 2024-05-09 03:11:13 2024-05-09 04:00:21 2024-05-09 04:23:34 0:23:13 0:14:26 0:08:47 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-05-09T04:19:59.008389+0000 mon.smithi003 (mon.0) 801 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi003.jzsqxg on smithi003: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7699541 2024-05-09 03:11:14 2024-05-09 04:00:31 2024-05-09 04:51:43 0:51:12 0:38:23 0:12:49 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
pass 7699542 2024-05-09 03:11:15 2024-05-09 04:03:12 2024-05-09 04:35:11 0:31:59 0:19:24 0:12:35 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 7699543 2024-05-09 03:11:16 2024-05-09 04:04:43 2024-05-09 04:19:46 0:15:03 0:06:20 0:08:43 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi089 with status 1: 'sudo yum -y install ceph-fuse'

pass 7699544 2024-05-09 03:11:17 2024-05-09 04:04:53 2024-05-09 04:38:11 0:33:18 0:21:54 0:11:24 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
pass 7699545 2024-05-09 03:11:18 2024-05-09 04:07:04 2024-05-09 04:23:48 0:16:44 0:06:49 0:09:55 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
pass 7699546 2024-05-09 03:11:19 2024-05-09 04:07:04 2024-05-09 04:50:09 0:43:05 0:32:13 0:10:52 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7699547 2024-05-09 03:11:21 2024-05-09 04:07:15 2024-05-09 04:30:10 0:22:55 0:12:38 0:10:17 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
fail 7699548 2024-05-09 03:11:22 2024-05-09 04:07:15 2024-05-09 05:12:42 1:05:27 0:53:26 0:12:01 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

"2024-05-09T04:39:43.655879+0000 mon.a (mon.0) 1322 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon mgr.y on smithi117 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7699549 2024-05-09 03:11:23 2024-05-09 04:07:15 2024-05-09 04:30:20 0:23:05 0:13:06 0:09:59 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7699550 2024-05-09 03:11:24 2024-05-09 04:07:26 2024-05-09 04:28:26 0:21:00 0:10:57 0:10:03 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} 2
fail 7699551 2024-05-09 03:11:25 2024-05-09 04:07:26 2024-05-09 04:41:03 0:33:37 0:23:47 0:09:50 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-05-09T04:36:57.578995+0000 mon.a (mon.0) 1124 : cluster [WRN] Health check failed: 2 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log