Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi151.front.sepia.ceph.com | smithi | True | True | 2024-03-28 15:06:54.214242 | scheduled_pdonnell@teuthology | centos | 9 | x86_64 | /home/teuthworker/archive/pdonnell-2024-03-28_07:14:07-fs-wip-batrick-testing-20240327.230800-distro-default-smithi/7628013 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7628013 | 2024-03-28 07:18:12 | 2024-03-28 15:06:23 | 2024-03-28 15:40:22 | 0:35:17 | | | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} | 3 |
pass | 7627955 | 2024-03-28 07:17:17 | 2024-03-28 14:23:13 | 2024-03-28 15:06:37 | 0:43:24 | 0:34:12 | 0:09:12 | smithi | main | centos | 9.stream | fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} | 2 | |
pass | 7627931 | 2024-03-28 07:16:54 | 2024-03-28 14:01:14 | 2024-03-28 14:23:12 | 0:21:58 | 0:11:44 | 0:10:14 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/sessionmap} | 2 | |
fail | 7627860 | 2024-03-28 07:15:44 | 2024-03-28 13:06:36 | 2024-03-28 13:54:36 | 0:48:00 | 0:36:20 | 0:11:40 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} | 3 | |
Failure Reason: "2024-03-28T13:32:15.001536+0000 mon.a (mon.0) 685 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
pass | 7627807 | 2024-03-28 07:14:54 | 2024-03-28 12:34:58 | 2024-03-28 13:07:08 | 0:32:10 | 0:16:36 | 0:15:34 | smithi | main | ubuntu | 22.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} | 2 | |
pass | 7627653 | 2024-03-27 22:56:12 | 2024-03-27 23:07:53 | 2024-03-27 23:45:57 | 0:38:04 | 0:27:02 | 0:11:02 | smithi | main | ubuntu | 22.04 | rgw/lifecycle/{cluster ignore-pg-availability overrides s3tests-branch supported-random-distro$/{ubuntu_latest} tasks/rgw_s3tests} | 1 | |
pass | 7626686 | 2024-03-27 15:04:01 | 2024-03-27 22:38:37 | 2024-03-27 23:07:51 | 0:29:14 | 0:17:07 | 0:12:07 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} | 2 | |
pass | 7626594 | 2024-03-27 15:02:49 | 2024-03-27 21:47:25 | 2024-03-27 22:39:41 | 0:52:16 | 0:44:29 | 0:07:47 | smithi | main | rhel | 8.6 | rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8}} | 2 | |
pass | 7626550 | 2024-03-27 15:02:14 | 2024-03-27 21:23:34 | 2024-03-27 21:48:02 | 0:24:28 | 0:13:13 | 0:11:15 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
dead | 7625968 | 2024-03-27 08:49:25 | 2024-03-27 09:16:58 | 2024-03-27 21:25:39 | 12:08:41 | | | smithi | main | centos | 9.stream | crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/snaps-few-objects} | 2 |
Failure Reason: hit max job timeout
pass | 7625724 | 2024-03-27 05:31:11 | 2024-03-27 08:45:49 | 2024-03-27 09:17:08 | 0:31:19 | 0:20:51 | 0:10:28 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/snapshots} | 2 | |
fail | 7625649 | 2024-03-27 05:29:40 | 2024-03-27 07:07:26 | 2024-03-27 08:39:25 | 1:31:59 | 1:17:51 | 0:14:08 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} | 3 | |
Failure Reason: "2024-03-27T07:46:57.085993+0000 mds.b (mds.0) 29 : cluster [WRN] Scrub error on inode 0x100000003c6 (/volumes/qa/sv_0/37681ca1-8638-4747-bc53-1ed649cd8eb9/client.0/tmp/ffsb/.deps) see mds.b log and `damage ls` output for details" in cluster log
pass | 7625579 | 2024-03-27 05:28:11 | 2024-03-27 05:53:54 | 2024-03-27 07:09:47 | 1:15:53 | 0:32:03 | 0:43:50 | smithi | main | centos | 8.stream | fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} | 3 | |
fail | 7625246 | 2024-03-27 03:46:08 | 2024-03-28 00:31:09 | 2024-03-28 07:11:05 | 6:39:56 | 6:25:32 | 0:14:24 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} | 3 | |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi099 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3be9a5c9bc793e11e2800a8c0c696e8b46742033 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
fail | 7624787 | 2024-03-27 01:09:25 | 2024-03-27 01:10:53 | 2024-03-27 01:59:34 | 0:48:41 | 0:38:38 | 0:10:03 | smithi | main | ubuntu | 20.04 | fs:functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/admin} | 2 | |
Failure Reason: Test failure: test_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
pass | 7624451 | 2024-03-26 21:41:19 | 2024-03-27 23:45:49 | 2024-03-28 00:35:21 | 0:49:32 | 0:38:25 | 0:11:07 | smithi | main | centos | 9.stream | rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} | 2 | |
pass | 7624286 | 2024-03-26 21:18:18 | 2024-03-27 05:55:24 | | | 936 | | smithi | main | centos | 9.stream | rbd/qemu/{cache/none clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/none features/journaling msgr-failures/few objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_latest} workloads/qemu_fsstress} | 3 |
pass | 7624192 | 2024-03-26 21:16:44 | 2024-03-27 01:59:24 | 2024-03-27 05:27:51 | 3:28:27 | 3:16:39 | 0:11:48 | smithi | main | centos | 9.stream | rbd/encryption/{cache/none clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec features/defaults msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{centos_latest} workloads/qemu_xfstests_luks2_luks1} | 3 | |
fail | 7624169 | 2024-03-26 20:33:53 | 2024-03-26 21:36:10 | 2024-03-26 22:02:12 | 0:26:02 | 0:15:21 | 0:10:41 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: "2024-03-26T21:57:51.164031+0000 mon.smithi098 (mon.0) 785 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
fail | 7624089 | 2024-03-26 20:32:05 | 2024-03-26 20:37:52 | 2024-03-26 21:35:22 | 0:57:30 | 0:47:31 | 0:09:59 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: "2024-03-26T20:55:56.782307+0000 mon.a (mon.0) 478 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log