Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi099.front.sepia.ceph.com | smithi | True | True | 2024-05-14 07:13:37.894007 | scheduled_gabrioux@teuthology | centos | 9 | x86_64 | /home/teuthworker/archive/gabrioux-2024-05-14_05:57:55-orch:cephadm-wip-guits-testing-2024-05-13-1110-distro-default-smithi/7705562 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7705562 | 2024-05-14 05:58:48 | 2024-05-14 07:13:37 | 2024-05-14 08:12:40 | 1:00:42 | | | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
pass | 7705543 | 2024-05-14 05:58:23 | 2024-05-14 06:49:07 | 2024-05-14 07:10:31 | 0:21:24 | 0:11:55 | 0:09:29 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} | 2 | |
fail | 7705508 | 2024-05-14 05:20:52 | 2024-05-14 05:45:41 | 2024-05-14 06:50:53 | 1:05:12 | 0:53:50 | 0:11:22 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi099 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c239949782f7d36fe6af709486da2555a000baeb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'
fail | 7705423 | 2024-05-14 00:33:45 | 2024-05-14 01:23:46 | 2024-05-14 01:58:48 | 0:35:02 | 0:25:38 | 0:09:24 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
Failure Reason:
"2024-05-14T01:44:33.908489+0000 mon.a (mon.0) 209 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
fail | 7705359 | 2024-05-14 00:31:20 | 2024-05-14 00:33:00 | 2024-05-14 01:20:10 | 0:47:10 | 0:37:53 | 0:09:17 | smithi | main | centos | 9.stream | fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds
pass | 7705304 | 2024-05-13 22:10:47 | 2024-05-14 03:06:06 | 2024-05-14 04:06:25 | 1:00:19 | 0:48:55 | 0:11:24 | smithi | main | centos | 8.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
pass | 7705251 | 2024-05-13 22:09:58 | 2024-05-14 02:27:41 | 2024-05-14 03:09:01 | 0:41:20 | 0:31:19 | 0:10:01 | smithi | main | centos | 8.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
pass | 7705230 | 2024-05-13 22:09:38 | 2024-05-14 02:10:01 | 2024-05-14 02:26:51 | 0:16:50 | 0:08:03 | 0:08:47 | smithi | main | centos | 8.stream | orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7705156 | 2024-05-13 21:33:08 | 2024-05-14 00:06:32 | 2024-05-14 00:30:11 | 0:23:39 | 0:14:01 | 0:09:38 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/readwrite thrashosds-health} | 4 | |
pass | 7705131 | 2024-05-13 21:32:44 | 2024-05-13 23:38:16 | 2024-05-14 00:06:57 | 0:28:41 | 0:16:37 | 0:12:04 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} | 4 | |
pass | 7705107 | 2024-05-13 21:32:20 | 2024-05-13 23:12:42 | 2024-05-13 23:41:16 | 0:28:34 | 0:18:23 | 0:10:11 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 | |
pass | 7705058 | 2024-05-13 21:11:03 | 2024-05-13 22:46:23 | 2024-05-13 23:13:27 | 0:27:04 | 0:16:12 | 0:10:52 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7705010 | 2024-05-13 21:10:14 | 2024-05-13 22:13:40 | 2024-05-13 22:46:24 | 0:32:44 | 0:18:57 | 0:13:47 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
pass | 7704955 | 2024-05-13 21:09:23 | 2024-05-13 21:31:30 | 2024-05-13 22:12:07 | 0:40:37 | 0:26:29 | 0:14:08 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/classic msgr/async start tasks/rotate-keys} | 2 | |
fail | 7704650 | 2024-05-13 07:43:31 | 2024-05-13 08:42:20 | 2024-05-13 09:17:05 | 0:34:45 | 0:22:55 | 0:11:50 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
Failure Reason:
"2024-05-13T09:13:58.868972+0000 mon.a (mon.0) 718 : cluster [WRN] Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)" in cluster log
pass | 7704612 | 2024-05-13 07:42:39 | 2024-05-13 08:17:42 | 2024-05-13 08:43:18 | 0:25:36 | 0:15:25 | 0:10:11 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7704580 | 2024-05-13 07:41:55 | 2024-05-13 07:48:21 | 2024-05-13 08:17:41 | 0:29:20 | 0:19:23 | 0:09:57 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
fail | 7704543 | 2024-05-13 05:54:40 | 2024-05-13 06:10:27 | 2024-05-13 07:46:57 | 1:36:30 | 1:24:22 | 0:12:08 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} | 2 | |
Failure Reason:
Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3ee2ba724b88bb242428fcf88b7dc576e740e26d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'
fail | 7704498 | 2024-05-13 00:24:52 | 2024-05-13 02:47:25 | 2024-05-13 03:32:33 | 0:45:08 | 0:25:44 | 0:19:24 | smithi | main | centos | 8.stream | upgrade:octopus-x/rgw-multisite/{clusters frontend overrides realm tasks upgrade/secondary} | 2 | |
Failure Reason:
An attempt to upgrade from a higher version to a lower one will always fail. Hint: check tags in the target git branch.
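The downgrade failure above can be caught before an upgrade is even attempted by comparing the installed version against the target tag; candidate tags on the target branch can be listed with `git ls-remote --tags <repo>`. A minimal sketch of that comparison using `sort -V` (the version strings below are hypothetical, not taken from the jobs in this run):

```shell
# Hypothetical versions for illustration; in practice, take the installed
# version from the running cluster and the target from the branch's tags.
installed="17.2.7"
target="16.2.15"

# sort -V orders version strings numerically; the last line is the newest.
newest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | tail -n 1)

# If the target is not the newest of the two, this "upgrade" is a downgrade
# and would fail the same way job 7704498 did.
if [ "$newest" != "$target" ]; then
  echo "refusing downgrade: $installed -> $target"
fi
```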
pass | 7704410 | 2024-05-12 22:05:44 | 2024-05-13 15:02:39 | 2024-05-13 15:33:48 | 0:31:09 | 0:24:09 | 0:07:00 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 |