Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
smithi135.front.sepia.ceph.com | smithi | True | True | 2024-07-27 01:56:52.376146 | scheduled_pdonnell@teuthology | centos | 9.stream | x86_64 | /home/teuthworker/archive/pdonnell-2024-07-26_23:45:44-fs-wip-pdonnell-testing-20240726.202642-debug-distro-default-smithi/7820926
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
running 7820926 2024-07-26 23:48:07 2024-07-27 01:56:12 2024-07-27 03:34:17 1:39:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
pass 7820898 2024-07-26 23:47:37 2024-07-27 01:32:45 2024-07-27 01:56:44 0:23:59 0:12:58 0:11:01 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} 2
pass 7820863 2024-07-26 23:46:58 2024-07-27 01:00:53 2024-07-27 01:33:09 0:32:16 0:19:39 0:12:37 smithi main centos 9.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/reef 1-client 2-upgrade 3-compat_client/yes}} 3
pass 7820068 2024-07-26 16:50:01 2024-07-26 21:38:25 2024-07-27 01:02:14 3:23:49 2:50:17 0:33:32 smithi main centos 9.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_latest}} 1
pass 7819710 2024-07-26 16:09:24 2024-07-26 17:01:02 2024-07-26 17:43:03 0:42:01 0:30:05 0:11:56 smithi main ubuntu 22.04 fs:volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone-progress}} 2
fail 7819524 2024-07-26 13:28:27 2024-07-26 21:06:37 2024-07-26 21:38:53 0:32:16 0:21:49 0:10:27 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

"2024-07-26T21:36:26.985683+0000 mon.a (mon.0) 589 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7819496 2024-07-26 13:27:57 2024-07-26 20:46:49 2024-07-26 21:06:38 0:19:49 0:09:38 0:10:11 smithi main centos 9.stream rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
pass 7819462 2024-07-26 13:27:22 2024-07-26 20:26:57 2024-07-26 20:46:49 0:19:52 0:09:22 0:10:30 smithi main centos 9.stream rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
fail 7819404 2024-07-26 13:26:19 2024-07-26 19:46:28 2024-07-26 20:27:38 0:41:10 0:30:29 0:10:41 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7819201 2024-07-26 13:24:38 2024-07-26 17:41:55 2024-07-26 19:46:23 2:04:28 1:53:15 0:11:13 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-07-26T18:02:19.689002+0000 mon.a (mon.0) 529 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7819148 2024-07-26 13:24:18 2024-07-26 15:58:11 2024-07-26 17:03:15 1:05:04 0:53:33 0:11:31 smithi main centos 9.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-journal-stress-workunit} 2
pass 7819138 2024-07-26 13:24:13 2024-07-26 15:32:35 2024-07-26 16:00:04 0:27:29 0:14:10 0:13:19 smithi main ubuntu 22.04 rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} 2
pass 7819112 2024-07-26 13:24:03 2024-07-26 14:58:31 2024-07-26 15:36:19 0:37:48 0:28:26 0:09:22 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
pass 7818977 2024-07-26 07:24:52 2024-07-26 07:50:06 2024-07-26 08:18:28 0:28:22 0:14:36 0:13:46 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-bitmap} rados supported-random-distro$/{centos_latest} thrashers/mapgap_host thrashosds-health workloads/redirect_promote_tests} 4
pass 7818908 2024-07-26 05:11:37 2024-07-26 05:14:26 2024-07-26 05:40:05 0:25:39 0:16:26 0:09:13 smithi main centos 9.stream fs:volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/misc}} 2
dead 7818745 2024-07-26 04:21:29 2024-07-26 08:49:35 2024-07-26 15:00:43 6:11:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/blogbench}} 3
Failure Reason:

hit max job timeout

pass 7818722 2024-07-26 04:20:39 2024-07-26 08:18:28 2024-07-26 08:49:47 0:31:19 0:20:24 0:10:55 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_fsstress} 2
fail 7818629 2024-07-26 02:09:27 2024-07-26 06:06:46 2024-07-26 07:53:13 1:46:27 1:36:01 0:10:26 smithi main centos 9.stream upgrade/quincy-x/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"1721975327.8923042 mon.a (mon.0) 653 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7818614 2024-07-26 02:09:11 2024-07-26 05:39:37 2024-07-26 06:07:42 0:28:05 0:18:33 0:09:32 smithi main centos 9.stream upgrade/cephfs/featureful_client/upgraded_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/quincy 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7818561 2024-07-26 01:25:21 2024-07-26 02:37:49 2024-07-26 03:45:44 1:07:55 0:57:59 0:09:56 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2