Name:          smithi189.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-25 13:34:56.757494
Locked By:     scheduled_pdonnell@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/teuthworker/archive/pdonnell-2024-04-25_10:00:23-fs-wip-pdonnell-testing-20240425.015853-debug-distro-default-smithi/7673175
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7673434 2024-04-25 12:13:50 2024-04-25 12:16:22 2024-04-25 13:03:35 0:47:13 0:37:26 0:09:47 smithi main centos 9.stream rbd:nvmeof/{base/install centos_latest conf/{disable-pool-app} workloads/nvmeof_thrash} 4
Failure Reason:

"2024-04-25T12:51:28.584478+0000 mon.a (mon.0) 33 : cluster [WRN] Health detail: HEALTH_WARN 2 failed cephadm daemon(s)" in cluster log

fail 7673175 2024-04-25 10:02:46 2024-04-25 13:19:34 2024-04-25 14:08:16 0:48:42 0:25:38 0:23:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7673153 2024-04-25 10:02:23 2024-04-25 13:04:03 2024-04-25 13:29:53 0:25:50 0:13:57 0:11:53 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/finisher_per_module}} 2
fail 7673102 2024-04-25 10:01:30 2024-04-25 10:37:11 2024-04-25 12:04:32 1:27:21 1:17:23 0:09:58 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi027 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:4 damage ls'

pass 7673080 2024-04-25 10:01:07 2024-04-25 10:18:46 2024-04-25 10:40:01 0:21:15 0:11:31 0:09:44 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_fsstress} 2
pass 7672848 2024-04-25 05:02:11 2024-04-25 05:33:43 2024-04-25 06:01:59 0:28:16 0:20:30 0:07:46 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rados_cache_snaps}} 3
fail 7672762 2024-04-25 03:54:29 2024-04-25 04:21:16 2024-04-25 05:18:41 0:57:25 0:45:58 0:11:27 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-04-25T04:57:09.465013+0000 mon.a (mon.0) 1047 : cluster [WRN] Health check failed: 2 stray daemon(s) ['stray daemon laundry.pid70383 on host smithi027 not managed by cephadm', 'stray daemon laundry.pid70417 on host smithi027 not managed by cephadm'] not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7672741 2024-04-25 03:54:07 2024-04-25 04:02:49 2024-04-25 04:21:17 0:18:28 0:11:00 0:07:28 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/off orchestrator_cli} 2
pass 7672471 2024-04-24 21:49:30 2024-04-25 03:27:49 2024-04-25 04:02:42 0:34:53 0:17:10 0:17:43 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/legacy$/{legacy-rxbounce} msgr-failures/few tasks/krbd_namespaces} 3
pass 7672430 2024-04-24 21:48:48 2024-04-25 02:42:47 2024-04-25 03:28:14 0:45:27 0:24:12 0:21:15 smithi main centos 8.stream krbd/fsx/{ceph/ceph clusters/3-node conf features/no-object-map ms_mode$/{crc-rxbounce} objectstore/bluestore-bitmap striping/default/{msgr-failures/many randomized-striping-off} tasks/fsx-1-client} 3
pass 7672404 2024-04-24 21:48:22 2024-04-25 02:18:54 2024-04-25 02:47:53 0:28:59 0:16:27 0:12:32 smithi main centos 8.stream krbd/ms_modeless/{bluestore-bitmap ceph/ceph clusters/fixed-3 conf tasks/krbd_default_map_options} 3
pass 7672374 2024-04-24 21:28:38 2024-04-25 01:54:24 2024-04-25 02:20:33 0:26:09 0:17:44 0:08:25 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/forward-scrub} 2
pass 7672346 2024-04-24 21:28:09 2024-04-25 01:34:40 2024-04-25 01:55:19 0:20:39 0:10:06 0:10:33 smithi main centos 9.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
pass 7672317 2024-04-24 21:27:40 2024-04-25 01:08:24 2024-04-25 01:38:08 0:29:44 0:19:54 0:09:50 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
dead 7671326 2024-04-24 11:41:57 2024-04-24 12:59:33 2024-04-25 01:10:46 12:11:13 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

hit max job timeout

fail 7671261 2024-04-24 09:26:34 2024-04-24 09:52:50 2024-04-24 10:33:02 0:40:12 0:33:17 0:06:55 smithi main centos 9.stream rbd/iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-04-24T10:08:03.444774+0000 mon.a (mon.0) 205 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

dead 7671260 2024-04-24 09:26:34 2024-04-24 09:52:49 2024-04-24 09:53:54 0:01:05 smithi main centos 9.stream rbd/iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi145

fail 7671216 2024-04-24 07:33:11 2024-04-24 08:04:31 2024-04-24 09:49:08 1:44:37 1:19:44 0:24:53 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

"2024-04-24T08:55:08.800397+0000 mds.b (mds.0) 34 : cluster [WRN] Scrub error on inode 0x10000000215 (/volumes/qa/sv_0/e395bed6-5bba-41b8-a256-3505e4afcca2/client.0/tmp/clients/client0/~dmtmp/COREL) see mds.b log and `damage ls` output for details" in cluster log

fail 7671197 2024-04-24 07:32:47 2024-04-24 07:34:18 2024-04-24 08:13:29 0:39:11 0:28:23 0:10:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi059 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1f81fb397ae98da7563d451a78c61574c8f4e6e0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7671043 2024-04-24 01:17:00 2024-04-24 10:39:15 2024-04-24 13:02:21 2:23:06 2:15:17 0:07:49 smithi main rhel 8.6 upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2