Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi081.front.sepia.ceph.com smithi True True 2024-04-23 20:19:20.048085 scheduled_teuthology@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/teuthology-2024-04-23_20:16:13-rbd-main-distro-default-smithi/7670141
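
This lock record can also be read programmatically rather than scraped. Below is a minimal sketch of fetching the same node record from the paddles REST API that backs this page; the base URL and the /nodes/<fqdn>/ endpoint shape are assumptions to adapt for your paddles deployment.

import requests

# Assumed paddles endpoint; paddles is the results/lock database behind this view.
PADDLES = "http://paddles.front.sepia.ceph.com"

def node_status(fqdn):
    """Fetch one test node's lock record as JSON."""
    resp = requests.get(f"{PADDLES}/nodes/{fqdn}/", timeout=30)
    resp.raise_for_status()
    return resp.json()

info = node_status("smithi081.front.sepia.ceph.com")
# Field names are expected to mirror the columns above (locked, locked_by, description, ...).
print(info.get("locked"), info.get("locked_by"), info.get("description"))
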
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7670141 2024-04-23 20:18:11 2024-04-23 20:19:19 2024-04-23 20:39:35 0:21:09 smithi main ubuntu 22.04 rbd/thrash/{base/install clusters/{fixed-2 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_fsx_cache_writethrough} 2
pass 7670094 2024-04-23 17:45:30 2024-04-23 17:51:58 2024-04-23 18:17:35 0:25:37 0:16:20 0:09:17 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/suites/pjd}} 2
pass 7670060 2024-04-23 17:19:14 2024-04-23 18:16:16 2024-04-23 19:22:31 1:06:15 0:57:04 0:09:11 smithi main centos 9.stream rgw/verify/{0-install accounts$/{main} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
fail 7669693 2024-04-23 14:17:07 2024-04-23 15:32:10 2024-04-23 17:46:25 2:14:15 2:03:18 0:10:57 smithi main ubuntu 22.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-04-23T16:09:21.164341+0000 mon.a (mon.0) 18 : cluster [ERR] Health check failed: 8 osds(s) are not reachable (OSD_UNREACHABLE)" in cluster log

fail 7669652 2024-04-23 14:16:24 2024-04-23 15:08:41 2024-04-23 15:19:57 0:11:16 0:04:51 0:06:25 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed on smithi136 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

pass 7669570 2024-04-23 14:04:48 2024-04-23 14:05:35 2024-04-23 15:08:35 1:03:00 0:41:45 0:21:15 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7669514 2024-04-23 09:50:10 2024-04-23 09:51:12 2024-04-23 10:36:18 0:45:06 0:36:24 0:08:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0a6c3ed699031b80a2b419e7e795368719871394 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'
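
The command line above is the standard workunit harness: teuthology clones the ceph repo at the tested sha1 (CEPH_REF) into clone.client.0, points the CEPH_* variables at the mount under test, and runs the script under a 3h timeout; any nonzero exit surfaces as "Command failed ... with status N". A condensed sketch of that assembly, with paths copied from the command above and the adjust-ulimits/ceph-coverage wrappers omitted.

import subprocess

def run_workunit(script, ref, testdir="/home/ubuntu/cephtest"):
    clone = f"{testdir}/clone.client.0"
    env = {
        "CEPH_REF": ref,                 # sha1 under test
        "CEPH_ID": "0",                  # client id the mount belongs to
        "CEPH_ARGS": "--cluster ceph",
        "TESTDIR": testdir,
        "CEPH_BASE": clone,
        "CEPH_ROOT": clone,
        "CEPH_MNT": f"{testdir}/mnt.0",
        "PATH": "/usr/bin:/usr/sbin:/bin:/sbin",
    }
    # check=True raises on nonzero exit, the analogue of the failure above.
    subprocess.run(["timeout", "3h", f"{clone}/qa/workunits/{script}"],
                   env=env, check=True)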

pass 7669479 2024-04-23 05:01:27 2024-04-23 05:01:27 2024-04-23 05:42:31 0:41:04 0:30:14 0:10:50 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_api_tests}} 3
fail 7669259 2024-04-22 22:47:15 2024-04-22 23:39:18 2024-04-23 00:18:21 0:39:03 0:27:38 0:11:25 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"2024-04-22T23:59:16.851941+0000 mon.smithi081 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7669196 2024-04-22 22:46:08 2024-04-22 23:07:00 2024-04-22 23:31:30 0:24:30 0:13:43 0:10:47 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

"2024-04-22T23:28:38.806256+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7669173 2024-04-22 22:45:45 2024-04-22 22:52:39 2024-04-22 23:07:25 0:14:46 0:08:33 0:06:13 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi081 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 958663d8-00fc-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'

fail 7669116 2024-04-22 22:10:39 2024-04-23 01:50:41 2024-04-23 02:19:50 0:29:09 0:21:36 0:07:33 smithi main rhel 8.6 orch/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi081 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f26c9c3fee57ca330501910de5a07c8769ef5dfc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7669081 2024-04-22 22:10:06 2024-04-23 01:20:37 2024-04-23 01:50:36 0:29:59 0:23:08 0:06:51 smithi main rhel 8.6 orch/cephadm/no-agent-workunits/{0-distro/rhel_8.6_container_tools_rhel8 mon_election/connectivity task/test_orch_cli} 1
pass 7669053 2024-04-22 22:09:40 2024-04-23 01:04:14 2024-04-23 01:21:22 0:17:08 0:08:09 0:08:59 smithi main centos 8.stream orch/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/connectivity task/test_cephadm_repos} 1
pass 7668992 2024-04-22 22:08:43 2024-04-23 00:24:53 2024-04-23 01:04:04 0:39:11 0:27:10 0:12:01 smithi main centos 8.stream orch/cephadm/no-agent-workunits/{0-distro/centos_8.stream_container_tools_crun mon_election/classic task/test_orch_cli_mon} 5
fail 7668893 2024-04-22 21:11:20 2024-04-22 22:21:50 2024-04-22 22:40:14 0:18:24 0:08:13 0:10:11 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi007 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'

fail 7668823 2024-04-22 21:10:08 2024-04-22 21:50:20 2024-04-22 22:06:42 0:16:22 0:06:18 0:10:04 smithi main centos 9.stream orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'

fail 7668782 2024-04-22 21:09:26 2024-04-22 21:34:42 2024-04-22 21:47:33 0:12:51 0:06:07 0:06:44 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_extra_daemon_features} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
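
This is the third consecutive job failing on the same cephadm pull of the same image, which points at the registry or image tag rather than the tests themselves. A small sketch for checking the image by hand with retries; it assumes podman (what cephadm shells out to for pulls on these distros) and copies the image string from the failures above.

import subprocess, time

IMAGE = ("quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/"
         "ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f")

def try_pull(image, attempts=3):
    """Retry the pull to separate transient registry blips from real breakage."""
    for i in range(attempts):
        proc = subprocess.run(["podman", "pull", image],
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return True
        time.sleep(2 ** i)  # exponential backoff between attempts
    return False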

fail 7668703 2024-04-22 20:12:50 2024-04-23 02:50:29 2024-04-23 03:34:23 0:43:54 0:35:10 0:08:44 smithi main centos 9.stream orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-04-23T03:13:50.889301+0000 mon.a (mon.0) 1204 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7668672 2024-04-22 20:12:22 2024-04-23 02:31:04 2024-04-23 02:51:15 0:20:11 0:11:25 0:08:46 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
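
Taken together, the listing above is 8 passed, 11 failed, and 1 running job. A small sketch that tallies the status column from a plain-text dump of rows like these (the first token of each job row is its status; the filename is illustrative).

from collections import Counter

STATUSES = {"pass", "fail", "dead", "running", "waiting", "queued"}

def tally(rows):
    counts = Counter()
    for row in rows:
        token = row.split(None, 1)[0] if row.strip() else ""
        if token in STATUSES:
            counts[token] += 1
    return counts

# tally(open("smithi081_jobs.txt").read().splitlines())
# -> Counter({'fail': 11, 'pass': 8, 'running': 1}) for the rows above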