Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi120.front.sepia.ceph.com | smithi | True | False | | | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-22_20:08:13-orch-main-distro-default-smithi/7668733 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7669462 | | 2024-04-23 01:24:12 | 2024-04-23 01:30:13 | 2024-04-23 01:56:46 | 0:26:33 | 0:18:53 | 0:07:40 | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/small-objects-balanced} | 2 |
fail | 7669246 | | 2024-04-22 22:47:01 | 2024-04-22 23:24:01 | 2024-04-23 00:02:59 | 0:38:58 | 0:27:49 | 0:11:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
Failure Reason: "2024-04-22T23:44:34.768270+0000 mon.smithi046 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log
fail | 7669222 | | 2024-04-22 22:46:36 | 2024-04-22 23:10:40 | 2024-04-22 23:23:30 | 0:12:50 | 0:05:39 | 0:07:11 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 |
Failure Reason: Command failed (workunit test cephadm/test_repos.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=43be020184947e53516056c9931e1ac5bdbbb1a5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'
pass | 7669125 | | 2024-04-22 22:10:47 | 2024-04-23 01:56:55 | 2024-04-23 02:21:58 | 0:25:03 | 0:16:45 | 0:08:18 | smithi | main | centos | 8.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_8.stream_container_tools_crun mon_election/connectivity task/test_cephadm_timeout} | 1 |
pass | 7669055 | | 2024-04-22 22:09:42 | 2024-04-23 01:04:15 | 2024-04-23 01:31:34 | 0:27:19 | 0:20:16 | 0:07:03 | smithi | main | rhel | 8.6 | orch/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 |
pass | 7669012 | | 2024-04-22 22:09:01 | 2024-04-23 00:37:03 | 2024-04-23 01:04:22 | 0:27:19 | 0:20:23 | 0:06:56 | smithi | main | rhel | 8.6 | orch/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 |
pass | 7668975 | | 2024-04-22 21:33:05 | 2024-04-23 00:10:05 | 2024-04-23 00:36:54 | 0:26:49 | 0:15:53 | 0:10:56 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsync thrashosds-health} | 4 |
pass | 7668940 | | 2024-04-22 21:32:31 | 2024-04-22 22:37:30 | 2024-04-22 23:10:32 | 0:33:02 | 0:24:11 | 0:08:51 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/snaps-many-objects thrashosds-health} | 4 |
fail | 7668872 | | 2024-04-22 21:10:58 | 2024-04-22 22:06:00 | 2024-04-22 22:25:01 | 0:19:01 | 0:07:25 | 0:11:36 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 |
Failure Reason: Command failed on smithi073 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
fail | 7668838 | | 2024-04-22 21:10:23 | 2024-04-22 21:50:25 | 2024-04-22 22:04:42 | 0:14:17 | 0:06:40 | 0:07:37 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 |
Failure Reason: Command failed on smithi046 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
fail | 7668801 | | 2024-04-22 21:09:45 | 2024-04-22 21:34:49 | 2024-04-22 21:52:04 | 0:17:15 | 0:06:58 | 0:10:17 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_orch_cli_mon} | 5 |
Failure Reason: Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
pass | 7668733 | | 2024-04-22 20:13:18 | 2024-04-23 03:07:24 | 2024-04-23 03:42:23 | 0:34:59 | 0:23:53 | 0:11:06 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 |
pass | 7668699 | | 2024-04-22 20:12:47 | 2024-04-23 02:47:37 | 2024-04-23 03:08:35 | 0:20:58 | 0:14:10 | 0:06:48 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 |
fail | 7668656 | | 2024-04-22 20:12:06 | 2024-04-23 02:21:06 | 2024-04-23 02:43:45 | 0:22:39 | 0:12:53 | 0:09:46 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 |
Failure Reason: "2024-04-23T02:37:14.249761+0000 mon.a (mon.0) 177 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7668588 | | 2024-04-22 20:11:01 | 2024-04-22 20:29:05 | 2024-04-22 21:34:12 | 1:05:07 | 0:56:24 | 0:08:43 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
Failure Reason: Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92a6a74a-00e8-11ef-bc93-c7b262605968 -e sha1=56e81df3aeb98b717efe2ab0537a8c60249f95f7 -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | keys\'"\'"\' | grep $sha1\''
pass | 7668570 | | 2024-04-22 20:10:45 | 2024-04-22 20:12:06 | 2024-04-22 20:29:04 | 0:16:58 | 0:11:15 | 0:05:43 | smithi | main | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/off orchestrator_cli} | 2 |
pass | 7668528 | | 2024-04-22 19:32:09 | 2024-04-22 19:32:50 | 2024-04-22 20:02:03 | 0:29:13 | 0:14:53 | 0:14:20 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_user_quota ubuntu_latest} | 2 |
pass | 7668409 | | 2024-04-22 14:52:27 | 2024-04-22 15:35:18 | 2024-04-22 16:02:51 | 0:27:33 | 0:17:44 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 |
pass | 7668354 | | 2024-04-22 14:51:30 | 2024-04-22 14:52:32 | 2024-04-22 15:36:54 | 0:44:22 | 0:34:45 | 0:09:37 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
pass | 7668274 | | 2024-04-22 12:37:37 | 2024-04-22 13:06:19 | 2024-04-22 13:51:15 | 0:44:56 | 0:36:09 | 0:08:47 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/iogen}} | 3 |