Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi190.front.sepia.ceph.com smithi True True 2024-07-26 23:53:09.928190 scheduled_yuriw@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/yuriw-2024-07-26_21:23:43-rados-wip-yuri-testing-2024-07-26-0628-distro-default-smithi/7820597
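
For orientation: the Locked Since timestamp (2024-07-26 23:53:09) matches the start time of the running job 7820597 in the listing below, and the Description column holds that job's teuthology archive directory. Pulpito renders this node page from the paddles database; a minimal sketch of fetching the same record over HTTP follows, assuming a paddles-style JSON endpoint at /nodes/<name>/ (the base URL, endpoint shape, and field names are assumptions for illustration, not taken from this dump):

    # Sketch: fetch a node's lock record from a paddles-style REST API.
    # Base URL, endpoint path, and field names are assumptions, not part
    # of the dump above.
    import json
    import urllib.request

    PADDLES = "https://paddles.front.sepia.ceph.com"

    def node_record(name: str) -> dict:
        """Return the JSON record for one test node (assumed endpoint)."""
        url = f"{PADDLES}/nodes/{name}/"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        rec = node_record("smithi190.front.sepia.ceph.com")
        # Assumed field names mirroring the table columns above.
        print(rec.get("locked"), rec.get("locked_by"), rec.get("description"))
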
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7820597 2024-07-26 21:33:52 2024-07-26 23:53:09 2024-07-27 00:07:04 0:15:12 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7820537 2024-07-26 21:32:43 2024-07-26 23:20:31 2024-07-26 23:53:06 0:32:35 0:21:51 0:10:44 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/pggrow_host thrashosds-health workloads/small-objects} 4
fail 7820481 2024-07-26 21:31:36 2024-07-26 22:44:54 2024-07-26 23:20:59 0:36:05 0:25:59 0:10:06 smithi main ubuntu 22.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-07-26T23:11:57.550752+0000 mon.a (mon.0) 536 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

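Job 7820481 did not crash; it failed because teuthology's cluster-log scraping found a health warning (PG_AVAILABILITY) that the run's ignorelist did not cover. A rough sketch of that kind of check over a plain-text cluster log, as an illustration of the failure mode rather than teuthology's actual scraper:

    # Sketch: flag cluster-log health warnings that are not ignorelisted.
    # Illustration of the failure mode above, not teuthology's own code.
    import re

    # Regexes a run might ignorelist; PG_AVAILABILITY is deliberately not
    # included, so a line like the one quoted above would fail the job.
    IGNORELIST = [
        r"\(OSDMAP_FLAGS\)",
        r"\(OSD_DOWN\)",
    ]

    def offending_lines(log_path: str) -> list[str]:
        """Return [WRN]/[ERR] cluster-log lines not matched by the ignorelist."""
        bad = []
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                if "[WRN]" not in line and "[ERR]" not in line:
                    continue
                if any(re.search(pat, line) for pat in IGNORELIST):
                    continue
                bad.append(line.rstrip())
        return bad

    # offending_lines("ceph.log") would report the PG_AVAILABILITY line above.
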
pass 7820431 2024-07-26 21:30:34 2024-07-26 22:18:20 2024-07-26 22:45:12 0:26:52 0:12:33 0:14:19 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache} 4
pass 7820391 2024-07-26 21:29:47 2024-07-26 21:57:22 2024-07-26 22:18:47 0:21:25 0:10:02 0:11:23 smithi main ubuntu 22.04 rados/multimon/{clusters/3 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 7819285 2024-07-26 13:25:14 2024-07-26 18:34:10 2024-07-26 21:57:38 3:23:28 3:11:37 0:11:51 smithi main centos 9.stream rbd/pwl-cache/home/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{centos_latest} 4-cache-path 5-cache-mode/rwl 6-cache-size/1G 7-workloads/c_api_tests_with_defaults conf/{disable-pool-app}} 2
Failure Reason:

Command failed (workunit test rbd/test_librbd.sh) on smithi190 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=387735b04c21d62e21975a50a7f6c06a95b3cf6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 RBD_FEATURES=61 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh'

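Exit status 124 in job 7819285 is the code GNU timeout returns when the wrapped command is still running at the deadline, so the librbd workunit was killed at the 3-hour mark (the "timeout 3h" in the command) rather than failing an assertion. A minimal sketch of running a script under the same kind of wall-clock limit and telling a timeout apart from an ordinary failure; the script path and limit here are placeholders, not values from the job's yaml:

    # Sketch: run a workunit-style script under a wall-clock limit and
    # distinguish a timeout (exit 124 from GNU `timeout`) from an ordinary
    # test failure. Script path and limit are placeholders.
    import subprocess

    def run_with_limit(script: str, limit: str = "3h") -> int:
        proc = subprocess.run(["timeout", limit, "bash", script])
        if proc.returncode == 124:
            print(f"{script}: killed by timeout after {limit}")
        elif proc.returncode != 0:
            print(f"{script}: failed with status {proc.returncode}")
        return proc.returncode

    # run_with_limit("qa/workunits/rbd/test_librbd.sh")
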
pass 7819013 2024-07-26 08:55:45 2024-07-26 09:52:26 2024-07-26 10:22:05 0:29:39 0:15:58 0:13:41 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-comp-lz4} rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 4
pass 7819000 2024-07-26 08:55:31 2024-07-26 09:20:46 2024-07-26 09:53:38 0:32:52 0:15:15 0:17:37 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore-options/write$/{write_v1} bluestore/bluestore-bitmap} rados supported-random-distro$/{centos_latest} thrashers/mapgap_host thrashosds-health workloads/redirect_promote_tests} 4
pass 7818990 2024-07-26 08:55:21 2024-07-26 09:02:50 2024-07-26 09:27:11 0:24:21 0:12:00 0:12:21 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-comp-lz4} rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} 4
pass 7818936 2024-07-26 05:13:08 2024-07-26 05:29:44 2024-07-26 07:19:31 1:49:47 1:36:38 0:13:09 smithi main centos 9.stream fs:volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
dead 7818839 2024-07-26 04:23:19 2024-07-26 12:27:03 2024-07-26 18:36:54 6:09:51 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/pjd}} 3
Failure Reason:

hit max job timeout

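"hit max job timeout" means the job exceeded teuthology's ceiling on total job runtime and was killed, which is why 7818839 is reported as dead with a runtime (6:09:51) but no duration. A toy watchdog along the same lines follows; the limit and the pass/fail/dead mapping are illustrative, not the lab's actual settings:

    # Sketch: a toy watchdog that marks a job "dead" once it exceeds a
    # ceiling on total runtime. The ceiling and kill policy are illustrative;
    # the real teuthology supervisor's settings are not shown in this dump.
    import subprocess

    MAX_JOB_SECONDS = 6 * 60 * 60  # illustrative ceiling, not the lab's value

    def supervise(cmd: list[str]) -> str:
        proc = subprocess.Popen(cmd)
        try:
            proc.wait(timeout=MAX_JOB_SECONDS)
            return "pass" if proc.returncode == 0 else "fail"
        except subprocess.TimeoutExpired:
            proc.kill()    # the job never gets to report a duration
            proc.wait()
            return "dead"  # matches the "hit max job timeout" outcome above
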
pass 7818795 2024-07-26 04:22:28 2024-07-26 11:33:52 2024-07-26 12:27:05 0:53:13 0:41:17 0:11:56 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
fail 7818756 2024-07-26 04:21:43 2024-07-26 10:21:16 2024-07-26 11:34:35 1:13:19 0:59:38 0:13:41 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e8f88bb1fe0f3b5172ca5b6b08c2be739c07e528 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

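Failure reasons like the two workunit failures above embed the exact command teuthology ran, including the workunit script and the sha1 under test (CEPH_REF), which is handy when triaging many similar failures. A small sketch that pulls those fields back out of such a string; the regexes assume the format shown in this dump and will not match other failure-reason shapes:

    # Sketch: extract the workunit script, CEPH_REF sha1, and exit status
    # from a teuthology failure-reason string like the ones quoted above.
    import re

    def parse_failure(reason: str) -> dict:
        workunit = re.search(r"workunit test (\S+)\)", reason)
        sha1 = re.search(r"CEPH_REF=([0-9a-f]{40})", reason)
        status = re.search(r"with status (\d+)", reason)
        return {
            "workunit": workunit.group(1) if workunit else None,
            "sha1": sha1.group(1) if sha1 else None,
            "status": int(status.group(1)) if status else None,
        }

    # For the ffsb failure above this yields:
    # {'workunit': 'suites/ffsb.sh',
    #  'sha1': 'e8f88bb1fe0f3b5172ca5b6b08c2be739c07e528', 'status': 1}
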
pass 7818732 2024-07-26 04:20:51 2024-07-26 08:30:16 2024-07-26 09:02:44 0:32:28 0:19:10 0:13:18 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/multifs-auth} 2
pass 7818715 2024-07-26 04:20:31 2024-07-26 08:03:10 2024-07-26 08:32:26 0:29:16 0:14:38 0:14:38 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 7818699 2024-07-26 04:20:13 2024-07-26 07:19:32 2024-07-26 08:03:58 0:44:26 0:31:35 0:12:51 smithi main centos 9.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_9.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-from/reef 1-client 2-upgrade 3-compat_client/no}} 3
pass 7818487 2024-07-26 01:22:25 2024-07-26 01:46:30 2024-07-26 03:01:24 1:14:54 1:03:51 0:11:03 smithi main centos 9.stream rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7818337 2024-07-25 21:33:38 2024-07-26 04:59:27 2024-07-26 05:30:44 0:31:17 0:18:58 0:12:19 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/ubuntu_latest tasks/readwrite thrashosds-health} 4
pass 7818320 2024-07-25 21:33:21 2024-07-26 04:38:31 2024-07-26 04:59:37 0:21:06 0:09:25 0:11:41 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
pass 7817675 2024-07-25 05:01:19 2024-07-26 04:03:59 2024-07-26 04:39:06 0:35:07 0:23:45 0:11:22 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_cls_all}} 3
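
Taken together, the listing covers 20 jobs on this node: 15 passed, 3 failed, 1 went dead on the job timeout, and 1 was still running when the page was captured. A small sketch for tallying a dump in this flattened format, keying only on the leading status word of each job row so the exact column layout does not matter:

    # Sketch: tally job outcomes from a flattened Pulpito node-jobs dump
    # like the one above, keyed on the leading status word of each row.
    from collections import Counter

    STATUSES = {"pass", "fail", "dead", "running", "queued", "waiting"}

    def tally(dump: str) -> Counter:
        counts = Counter()
        for line in dump.splitlines():
            first = line.split(maxsplit=1)[0] if line.strip() else ""
            if first in STATUSES:
                counts[first] += 1
        return counts

    # For the listing above: Counter({'pass': 15, 'fail': 3, 'dead': 1, 'running': 1})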