Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7680679 2024-04-30 05:42:53 2024-04-30 09:03:00 2024-04-30 09:32:34 0:29:34 0:20:23 0:09:11 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7680680 2024-04-30 05:42:54 2024-04-30 09:03:01 2024-04-30 09:26:10 0:23:09 0:14:56 0:08:13 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
pass 7680681 2024-04-30 05:42:55 2024-04-30 09:03:01 2024-04-30 09:42:05 0:39:04 0:31:50 0:07:14 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7680682 2024-04-30 05:42:57 2024-04-30 09:03:01 2024-04-30 09:50:26 0:47:25 0:34:16 0:13:09 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7680683 2024-04-30 05:42:58 2024-04-30 09:09:03 2024-04-30 10:07:29 0:58:26 0:46:34 0:11:52 smithi main ubuntu 22.04 orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
Failure Reason:

"2024-04-30T09:35:04.519259+0000 mon.a (mon.0) 324 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

pass 7680684 2024-04-30 05:42:59 2024-04-30 09:09:53 2024-04-30 09:28:59 0:19:06 0:12:31 0:06:35 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
pass 7680685 2024-04-30 05:43:00 2024-04-30 09:09:53 2024-04-30 09:27:14 0:17:21 0:11:04 0:06:17 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/off orchestrator_cli} 2
pass 7680686 2024-04-30 05:43:01 2024-04-30 09:09:54 2024-04-30 09:38:15 0:28:21 0:18:53 0:09:28 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7680687 2024-04-30 05:43:02 2024-04-30 09:10:44 2024-04-30 09:44:03 0:33:19 0:27:11 0:06:08 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-04-30T09:24:10.861492+0000 mon.a (mon.0) 204 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7680688 2024-04-30 05:43:03 2024-04-30 09:10:45 2024-04-30 09:31:00 0:20:15 0:11:45 0:08:30 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} 2
fail 7680689 2024-04-30 05:43:04 2024-04-30 09:10:45 2024-04-30 09:33:45 0:23:00 0:15:17 0:07:43 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-04-30T09:27:16.210316+0000 mon.smithi154 (mon.0) 862 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi154.lkgxmu on smithi154: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi173.vmdtpd on smithi173: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7680690 2024-04-30 05:43:05 2024-04-30 09:10:45 2024-04-30 09:27:39 0:16:54 0:10:38 0:06:16 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} 1
pass 7680691 2024-04-30 05:43:06 2024-04-30 09:10:46 2024-04-30 09:30:34 0:19:48 0:10:31 0:09:17 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
fail 7680692 2024-04-30 05:43:07 2024-04-30 09:13:07 2024-04-30 10:09:53 0:56:46 0:47:55 0:08:51 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-04-30T09:52:29.457352+0000 mon.a (mon.0) 1148 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon iscsi.foo.smithi026.romfrr on smithi026 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7680693 2024-04-30 05:43:08 2024-04-30 09:13:17 2024-04-30 09:58:52 0:45:35 0:37:54 0:07:41 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 7680694 2024-04-30 05:43:09 2024-04-30 09:13:27 2024-04-30 09:41:07 0:27:40 0:21:08 0:06:32 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7680695 2024-04-30 05:43:10 2024-04-30 09:13:58 2024-04-30 09:35:06 0:21:08 0:13:50 0:07:18 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
fail 7680696 2024-04-30 05:43:11 2024-04-30 09:13:58 2024-04-30 09:37:40 0:23:42 0:15:23 0:08:19 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"2024-04-30T09:33:07.907235+0000 mon.smithi045 (mon.0) 988 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon nfs.foo.0.0.smithi045.ublgrk on smithi045 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7680697 2024-04-30 05:43:12 2024-04-30 09:14:19 2024-04-30 09:48:37 0:34:18 0:19:50 0:14:28 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-04-30T09:44:36.524934+0000 mon.smithi078 (mon.0) 822 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi078.vperxy on smithi078: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7680698 2024-04-30 05:43:13 2024-04-30 09:18:10 2024-04-30 09:37:38 0:19:28 0:11:47 0:07:41 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7680699 2024-04-30 05:43:14 2024-04-30 09:18:30 2024-04-30 09:53:09 0:34:39 0:24:46 0:09:53 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-04-30T09:49:17.720979+0000 mon.a (mon.0) 1148 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

pass 7680700 2024-04-30 05:43:15 2024-04-30 09:19:20 2024-04-30 09:56:40 0:37:20 0:25:16 0:12:04 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
fail 7680701 2024-04-30 05:43:16 2024-04-30 09:20:51 2024-04-30 10:35:21 1:14:30 1:07:03 0:07:27 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7680702 2024-04-30 05:43:17 2024-04-30 09:22:12 2024-04-30 09:43:16 0:21:04 0:13:34 0:07:30 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-04-30T09:38:22.867847+0000 mon.smithi121 (mon.0) 790 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi121.otgnfd on smithi121: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7680703 2024-04-30 05:43:18 2024-04-30 09:22:22 2024-04-30 09:57:35 0:35:13 0:23:56 0:11:17 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 7680704 2024-04-30 05:43:19 2024-04-30 09:26:13 2024-04-30 09:43:07 0:16:54 0:10:38 0:06:16 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
pass 7680705 2024-04-30 05:43:20 2024-04-30 09:26:13 2024-04-30 09:45:28 0:19:15 0:12:55 0:06:20 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7680706 2024-04-30 05:43:21 2024-04-30 09:26:34 2024-04-30 10:24:19 0:57:45 0:48:14 0:09:31 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
dead 7680707 2024-04-30 05:43:22 2024-04-30 09:26:34 2024-04-30 09:27:39 0:01:05 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi191

pass 7680708 2024-04-30 05:43:23 2024-04-30 09:26:34 2024-04-30 09:47:29 0:20:55 0:13:29 0:07:26 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
pass 7680709 2024-04-30 05:43:24 2024-04-30 09:26:35 2024-04-30 09:54:06 0:27:31 0:18:48 0:08:43 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
pass 7680710 2024-04-30 05:43:25 2024-04-30 09:26:35 2024-04-30 09:46:33 0:19:58 0:12:47 0:07:11 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7680711 2024-04-30 05:43:26 2024-04-30 09:27:16 2024-04-30 10:06:40 0:39:24 0:32:32 0:06:52 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7680712 2024-04-30 05:43:27 2024-04-30 09:27:46 2024-04-30 09:50:59 0:23:13 0:16:16 0:06:57 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7680713 2024-04-30 05:43:29 2024-04-30 09:27:57 2024-04-30 09:47:58 0:20:01 0:10:35 0:09:26 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

"2024-04-30T09:45:18.231282+0000 mon.a (mon.0) 554 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi193 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7680714 2024-04-30 05:43:30 2024-04-30 09:30:07 2024-04-30 10:22:32 0:52:25 0:40:51 0:11:34 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"2024-04-30T09:52:14.639468+0000 mon.a (mon.0) 433 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7680715 2024-04-30 05:43:31 2024-04-30 09:30:38 2024-04-30 09:55:32 0:24:54 0:15:19 0:09:35 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"2024-04-30T09:49:08.056166+0000 mon.a (mon.0) 425 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi088 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7680716 2024-04-30 05:43:32 2024-04-30 09:31:08 2024-04-30 09:53:52 0:22:44 0:14:41 0:08:03 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
pass 7680717 2024-04-30 05:43:33 2024-04-30 09:32:29 2024-04-30 09:54:02 0:21:33 0:12:49 0:08:44 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} 1
pass 7680718 2024-04-30 05:43:34 2024-04-30 09:32:29 2024-04-30 09:51:41 0:19:12 0:12:07 0:07:05 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
pass 7680719 2024-04-30 05:43:35 2024-04-30 09:32:39 2024-04-30 10:01:42 0:29:03 0:18:20 0:10:43 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} 2
fail 7680720 2024-04-30 05:43:36 2024-04-30 09:33:30 2024-04-30 09:56:04 0:22:34 0:13:57 0:08:37 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-04-30T09:48:16.635783+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.0 on smithi028 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7680721 2024-04-30 05:43:37 2024-04-30 09:35:10 2024-04-30 10:06:33 0:31:23 0:20:11 0:11:12 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7680722 2024-04-30 05:43:38 2024-04-30 09:36:11 2024-04-30 10:27:32 0:51:21 0:44:42 0:06:39 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
pass 7680723 2024-04-30 05:43:39 2024-04-30 09:36:21 2024-04-30 10:33:23 0:57:02 0:45:31 0:11:31 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
pass 7680724 2024-04-30 05:43:40 2024-04-30 09:37:42 2024-04-30 09:55:30 0:17:48 0:10:41 0:07:07 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7680725 2024-04-30 05:43:41 2024-04-30 09:38:36 2024-04-30 10:53:38 1:15:02 1:06:54 0:08:08 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7680726 2024-04-30 05:43:42 2024-04-30 09:40:27 2024-04-30 09:59:40 0:19:13 0:12:38 0:06:35 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 7680727 2024-04-30 05:43:43 2024-04-30 09:40:27 2024-04-30 10:10:38 0:30:11 0:19:14 0:10:57 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
pass 7680728 2024-04-30 05:43:44 2024-04-30 09:41:08 2024-04-30 10:01:35 0:20:27 0:12:53 0:07:34 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 7680729 2024-04-30 05:43:45 2024-04-30 09:42:08 2024-04-30 10:09:40 0:27:32 0:17:57 0:09:35 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
fail 7680730 2024-04-30 05:43:46 2024-04-30 09:42:09 2024-04-30 10:13:30 0:31:21 0:21:23 0:09:58 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

"2024-04-30T10:07:07.828262+0000 mon.smithi033 (mon.0) 793 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) ['daemon jaeger-agent.smithi033 on smithi033 is in unknown state', 'daemon jaeger-agent.smithi134 on smithi134 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7680731 2024-04-30 05:43:47 2024-04-30 09:42:09 2024-04-30 10:19:14 0:37:05 0:30:14 0:06:51 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 7680732 2024-04-30 05:43:48 2024-04-30 09:42:20 2024-04-30 10:11:30 0:29:10 0:23:27 0:05:43 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
pass 7680733 2024-04-30 05:43:49 2024-04-30 09:42:20 2024-04-30 09:56:57 0:14:37 0:06:20 0:08:17 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
pass 7680734 2024-04-30 05:43:50 2024-04-30 09:42:20 2024-04-30 10:23:26 0:41:06 0:33:41 0:07:25 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7680735 2024-04-30 05:43:51 2024-04-30 09:42:31 2024-04-30 10:05:22 0:22:51 0:15:30 0:07:21 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
Failure Reason:

"2024-04-30T10:02:30.481890+0000 mon.a (mon.0) 211 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7680736 2024-04-30 05:43:52 2024-04-30 09:42:31 2024-04-30 10:06:00 0:23:29 0:14:03 0:09:26 smithi main ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/on orchestrator_cli} 2
pass 7680737 2024-04-30 05:43:53 2024-04-30 09:42:31 2024-04-30 10:02:05 0:19:34 0:12:07 0:07:27 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} 2
pass 7680738 2024-04-30 05:43:54 2024-04-30 09:42:32 2024-04-30 10:03:59 0:21:27 0:13:46 0:07:41 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7680739 2024-04-30 05:43:55 2024-04-30 09:42:32 2024-04-30 10:09:59 0:27:27 0:18:39 0:08:48 smithi main ubuntu 22.04 orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/rgw 3-final} 1
pass 7680740 2024-04-30 05:43:56 2024-04-30 09:42:32 2024-04-30 10:02:05 0:19:33 0:11:39 0:07:54 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
fail 7680741 2024-04-30 05:43:57 2024-04-30 09:42:33 2024-04-30 10:34:25 0:51:52 0:44:32 0:07:20 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6cc3f5e2-06d7-11ef-bc93-c7b262605968 -e sha1=3a873b0ce83192ee05b56734dad1076a7a94ecc7 -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\''

pass 7680742 2024-04-30 05:43:58 2024-04-30 09:42:43 2024-04-30 10:04:22 0:21:39 0:14:58 0:06:41 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7680743 2024-04-30 05:43:59 2024-04-30 09:43:14 2024-04-30 10:05:52 0:22:38 0:14:04 0:08:34 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7680744 2024-04-30 05:44:00 2024-04-30 09:44:34 2024-04-30 10:07:53 0:23:19 0:15:13 0:08:06 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7680745 2024-04-30 05:44:01 2024-04-30 09:44:45 2024-04-30 10:06:42 0:21:57 0:13:18 0:08:39 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

"2024-04-30T10:01:32.903833+0000 mon.a (mon.0) 344 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi064 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7680746 2024-04-30 05:44:02 2024-04-30 09:45:35 2024-04-30 10:16:24 0:30:49 0:20:11 0:10:38 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7680747 2024-04-30 05:44:03 2024-04-30 09:46:36 2024-04-30 10:32:45 0:46:09 0:37:44 0:08:25 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

dead 7680748 2024-04-30 05:44:04 2024-04-30 09:47:36 2024-04-30 21:57:17 12:09:41 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

hit max job timeout

pass 7680749 2024-04-30 05:44:05 2024-04-30 09:49:07 2024-04-30 10:46:48 0:57:41 0:47:17 0:10:24 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
pass 7680750 2024-04-30 05:44:06 2024-04-30 09:49:07 2024-04-30 10:19:45 0:30:38 0:23:10 0:07:28 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} 2
pass 7680751 2024-04-30 05:44:07 2024-04-30 09:50:28 2024-04-30 10:23:28 0:33:00 0:22:56 0:10:04 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
pass 7680752 2024-04-30 05:44:08 2024-04-30 09:51:48 2024-04-30 10:10:58 0:19:10 0:12:41 0:06:29 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
pass 7680753 2024-04-30 05:44:09 2024-04-30 09:51:48 2024-04-30 10:13:16 0:21:28 0:12:33 0:08:55 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7680754 2024-04-30 05:44:10 2024-04-30 09:53:59 2024-04-30 10:13:10 0:19:11 0:12:48 0:06:23 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
fail 7680755 2024-04-30 05:44:12 2024-04-30 09:54:10 2024-04-30 10:18:41 0:24:31 0:15:34 0:08:57 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-04-30T10:11:57.339116+0000 mon.smithi098 (mon.0) 863 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) ["Failed while placing nfs.foo.0.0.smithi098.wmnuwi on smithi098: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n", "Failed while placing nfs.foo.1.0.smithi156.lhiztu on smithi156: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

dead 7680756 2024-04-30 05:44:13 2024-04-30 09:55:40 2024-04-30 22:03:51 12:08:11 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

hit max job timeout

pass 7680757 2024-04-30 05:44:14 2024-04-30 09:55:41 2024-04-30 10:28:27 0:32:46 0:22:48 0:09:58 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7680758 2024-04-30 05:44:15 2024-04-30 09:56:41 2024-04-30 10:36:06 0:39:25 0:32:17 0:07:08 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7680759 2024-04-30 05:44:16 2024-04-30 09:57:01 2024-04-30 10:18:45 0:21:44 0:13:20 0:08:24 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-04-30T10:13:51.216550+0000 mon.smithi089 (mon.0) 844 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi089.rbzhch on smithi089: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7680760 2024-04-30 05:44:17 2024-04-30 09:57:42 2024-04-30 10:18:29 0:20:47 0:11:21 0:09:26 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
pass 7680761 2024-04-30 05:44:18 2024-04-30 09:57:42 2024-04-30 10:45:36 0:47:54 0:38:36 0:09:18 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
pass 7680762 2024-04-30 05:44:19 2024-04-30 09:57:53 2024-04-30 10:29:21 0:31:28 0:17:56 0:13:32 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7680763 2024-04-30 05:44:20 2024-04-30 09:57:53 2024-04-30 10:31:30 0:33:37 0:24:34 0:09:03 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-04-30T10:27:25.430316+0000 mon.a (mon.0) 1122 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

pass 7680764 2024-04-30 05:44:21 2024-04-30 09:57:53 2024-04-30 10:56:40 0:58:47 0:47:29 0:11:18 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
pass 7680765 2024-04-30 05:44:22 2024-04-30 09:57:54 2024-04-30 10:51:23 0:53:29 0:43:38 0:09:51 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7680766 2024-04-30 05:44:23 2024-04-30 09:57:54 2024-04-30 10:22:52 0:24:58 0:18:26 0:06:32 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
fail 7680767 2024-04-30 05:44:24 2024-04-30 09:57:55 2024-04-30 10:20:01 0:22:06 0:13:44 0:08:22 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-04-30T10:15:12.200552+0000 mon.smithi088 (mon.0) 784 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi088.zdlnii on smithi088: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7680768 2024-04-30 05:44:25 2024-04-30 09:57:55 2024-04-30 10:46:09 0:48:14 0:37:16 0:10:58 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
pass 7680769 2024-04-30 05:44:26 2024-04-30 09:58:05 2024-04-30 10:26:18 0:28:13 0:17:55 0:10:18 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
pass 7680770 2024-04-30 05:44:27 2024-04-30 09:58:06 2024-04-30 10:28:41 0:30:35 0:18:48 0:11:47 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7680771 2024-04-30 05:44:28 2024-04-30 09:58:06 2024-04-30 10:29:36 0:31:30 0:19:58 0:11:32 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
fail 7680772 2024-04-30 05:44:29 2024-04-30 09:59:47 2024-04-30 10:46:57 0:47:10 0:37:16 0:09:54 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7680773 2024-04-30 05:44:30 2024-04-30 10:01:37 2024-05-01 11:35:09 1 day, 1:33:32 19:34:15 5:59:17 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Command failed on smithi148 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:3a873b0ce83192ee05b56734dad1076a7a94ecc7 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6837909a-070b-11ef-bc93-c7b262605968 -- bash -c 'fail for debug testing'"

pass 7680774 2024-04-30 05:44:31 2024-04-30 10:01:48 2024-04-30 10:21:16 0:19:28 0:12:59 0:06:29 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7680775 2024-04-30 05:44:33 2024-04-30 10:02:08 2024-04-30 10:25:27 0:23:19 0:16:34 0:06:45 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7680776 2024-04-30 05:44:34 2024-04-30 10:02:09 2024-04-30 11:05:15 1:03:06 0:55:48 0:07:18 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7680777 2024-04-30 05:44:35 2024-04-30 10:02:09 2024-04-30 10:31:37 0:29:28 0:21:56 0:07:32 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2