Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi098.front.sepia.ceph.com smithi True True 2024-05-09 16:21:36.235422 scheduled_teuthology@teuthology centos 8 x86_64 /home/teuthworker/archive/teuthology-2024-05-01_01:16:02-upgrade:quincy-x-reef-distro-default-smithi/7682444
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7699702 2024-05-09 15:36:35 2024-05-09 15:41:30 2024-05-09 16:21:33 0:40:03 0:24:35 0:15:28 smithi main ubuntu 22.04 rgw:notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/kafka/{0-install supported-distros/{ubuntu_latest} test_kafka}} 2
pass 7699683 2024-05-09 10:23:30 2024-05-09 10:33:26 2024-05-09 10:52:31 0:19:05 0:08:32 0:10:33 smithi main centos 9.stream fs:volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/finisher_per_module}} 2
fail 7699620 2024-05-09 09:15:36 2024-05-09 09:18:52 2024-05-09 10:33:26 1:14:34 1:04:41 0:09:53 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

pass 7699576 2024-05-09 05:02:36 2024-05-09 12:08:35 2024-05-09 12:39:10 0:30:35 0:20:26 0:10:09 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_python}} 3
pass 7699549 2024-05-09 03:11:23 2024-05-09 04:07:15 2024-05-09 04:30:20 0:23:05 0:13:06 0:09:59 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7699428 2024-05-09 00:24:32 2024-05-09 05:52:57 2024-05-09 06:37:57 0:45:00 0:35:03 0:09:57 smithi main centos 9.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7699109 2024-05-08 22:11:12 2024-05-09 05:04:17 2024-05-09 05:52:57 0:48:40 0:38:29 0:10:11 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
pass 7699078 2024-05-08 22:10:40 2024-05-09 04:30:18 2024-05-09 05:05:23 0:35:05 0:24:51 0:10:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/direct_io}} 3
fail 7699013 2024-05-08 22:09:31 2024-05-09 02:17:56 2024-05-09 03:57:52 1:39:56 1:30:24 0:09:32 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/5 tasks/fsstress validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable malloc malloc strdup

pass 7698973 2024-05-08 22:08:50 2024-05-09 01:37:11 2024-05-09 02:18:33 0:41:22 0:32:23 0:08:59 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
pass 7698924 2024-05-08 22:07:58 2024-05-09 00:49:24 2024-05-09 01:37:15 0:47:51 0:34:36 0:13:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/fsstress}} 3
pass 7698880 2024-05-08 22:07:11 2024-05-09 00:11:58 2024-05-09 00:49:55 0:37:57 0:27:49 0:10:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/fsstress}} 3
fail 7698820 2024-05-08 21:49:02 2024-05-09 11:23:06 2024-05-09 11:54:54 0:31:48 0:14:35 0:17:13 smithi main centos 8.stream krbd/ms_modeless/{bluestore-bitmap ceph/ceph clusters/fixed-3 conf tasks/krbd_rxbounce} 3
Failure Reason:

Command failed (workunit test rbd/krbd_rxbounce.sh) on smithi105 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cd0c865869019ea8bf6c06db41d90d8ddecccdd0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/krbd_rxbounce.sh'

pass 7698790 2024-05-08 21:48:31 2024-05-09 10:51:50 2024-05-09 11:23:40 0:31:50 0:15:21 0:16:29 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/secure msgr-failures/few tasks/krbd_fallocate} 3
pass 7698693 2024-05-08 21:27:43 2024-05-09 08:24:12 2024-05-09 09:19:38 0:55:26 0:46:43 0:08:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iogen}} 3
pass 7698646 2024-05-08 21:26:52 2024-05-09 07:46:46 2024-05-09 08:24:24 0:37:38 0:25:47 0:11:51 smithi main ubuntu 22.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/multifs-auth} 2
pass 7698561 2024-05-08 21:25:26 2024-05-09 06:36:05 2024-05-09 07:48:44 1:12:39 0:59:48 0:12:51 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-3-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cephfs_misc_tests} 5
pass 7697267 2024-05-08 04:01:38 2024-05-08 06:02:35 2024-05-08 07:28:17 1:25:42 1:07:51 0:17:51 smithi main centos 9.stream fs:workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/fs/misc}} 3
fail 7697243 2024-05-08 04:01:18 2024-05-08 05:22:13 2024-05-08 06:03:30 0:41:17 0:31:39 0:09:38 smithi main centos 9.stream fs:workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/fs/norstats}} 3
Failure Reason:

"2024-05-08T05:50:00.000298+0000 mon.a (mon.0) 985 : cluster [WRN] application not enabled on pool 'cephfs_metadata'" in cluster log

fail 7696840 2024-05-08 00:45:43 2024-05-08 04:19:17 2024-05-08 05:19:34 1:00:17 0:49:22 0:10:55 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-05-08T04:50:00.000129+0000 mon.a (mon.0) 1098 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log