Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi110.front.sepia.ceph.com | smithi | True | True | 2024-04-26 10:59:28.230575 | scheduled_leonidus@teuthology | centos | 9 | x86_64 | /home/teuthworker/archive/leonidus-2024-04-26_10:54:14-fs-wip-lusov-quiescer-fixes-distro-default-smithi/7674683 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7674683 | | 2024-04-26 10:55:04 | 2024-04-26 10:58:58 | 2024-04-26 15:58:55 | 5:01:30 | | | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} | 3 |
pass | 7674654 | | 2024-04-26 07:24:01 | 2024-04-26 07:42:32 | 2024-04-26 08:18:09 | 0:35:37 | 0:24:48 | 0:10:49 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_s3tests ubuntu_latest} | 2 |
fail | 7674575 | | 2024-04-26 02:09:12 | 2024-04-26 04:26:33 | 2024-04-26 07:34:55 | 3:08:22 | 3:00:43 | 0:07:39 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 |
Failure Reason: "2024-04-26T05:00:00.000164+0000 mon.a (mon.0) 649 : cluster 3 [WRN] OSDMAP_FLAGS: noscrub flag(s) set" in cluster log
pass | 7674431 | | 2024-04-26 01:28:20 | 2024-04-26 03:18:11 | 2024-04-26 04:27:17 | 1:09:06 | 1:03:21 | 0:05:45 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/misc} | 1 |
pass | 7674372 | | 2024-04-26 01:27:17 | 2024-04-26 02:48:23 | 2024-04-26 03:17:53 | 0:29:30 | 0:20:03 | 0:09:27 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 |
pass | 7674338 | | 2024-04-26 01:26:40 | 2024-04-26 02:31:28 | 2024-04-26 02:48:18 | 0:16:50 | 0:10:36 | 0:06:14 | smithi | main | centos | 9.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 |
fail | 7674254 | | 2024-04-26 01:03:35 | 2024-04-26 01:21:54 | 2024-04-26 02:17:04 | 0:55:10 | 0:45:11 | 0:09:59 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/test_o_trunc}} | 3 |
Failure Reason: "2024-04-26T01:50:00.000295+0000 mon.a (mon.0) 1036 : cluster [WRN] fs cephfs has 3 MDS online, but wants 5" in cluster log
pass | 7674121 | | 2024-04-25 22:33:16 | 2024-04-26 10:13:09 | 2024-04-26 10:59:23 | 0:46:14 | 0:39:34 | 0:06:40 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/rados_api_tests thrashosds-health} | 4 |
pass | 7674098 | | 2024-04-25 22:32:53 | 2024-04-26 09:46:26 | 2024-04-26 10:13:44 | 0:27:18 | 0:16:40 | 0:10:38 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} | 4 |
fail | 7674073 | | 2024-04-25 21:33:03 | 2024-04-25 22:38:47 | 2024-04-25 22:57:26 | 0:18:39 | 0:11:10 | 0:07:29 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 |
Failure Reason: Command failed (workunit test suites/fsx.sh) on smithi052 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b22e2ebdeb24376882b7bda2a7329c8cccc2276a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
pass | 7674061 | | 2024-04-25 21:32:51 | 2024-04-25 22:22:50 | 2024-04-25 22:40:00 | 0:17:10 | 0:10:19 | 0:06:51 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
pass | 7674047 | | 2024-04-25 21:32:37 | 2024-04-25 22:01:21 | 2024-04-25 22:23:03 | 0:21:42 | 0:11:27 | 0:10:15 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
pass | 7674033 | | 2024-04-25 21:32:23 | 2024-04-25 21:42:42 | 2024-04-25 22:03:05 | 0:20:23 | 0:10:26 | 0:09:57 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 |
pass | 7673967 | | 2024-04-25 21:05:28 | 2024-04-26 00:44:32 | 2024-04-26 01:21:40 | 0:37:08 | 0:26:17 | 0:10:51 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 |
pass | 7673924 | | 2024-04-25 21:04:42 | 2024-04-26 00:24:33 | 2024-04-26 00:45:51 | 0:21:18 | 0:14:54 | 0:06:24 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_host_drain} | 3 |
pass | 7673874 | | 2024-04-25 21:03:51 | 2024-04-25 23:58:37 | 2024-04-26 00:24:32 | 0:25:55 | 0:13:53 | 0:12:02 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7673839 | | 2024-04-25 21:03:17 | 2024-04-25 23:33:20 | 2024-04-26 00:00:26 | 0:27:06 | 0:14:31 | 0:12:35 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
pass | 7673797 | | 2024-04-25 21:02:37 | 2024-04-25 23:09:10 | 2024-04-25 23:34:20 | 0:25:10 | 0:18:56 | 0:06:14 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-overwrites} | 2 |
pass | 7673743 | | 2024-04-25 21:01:47 | 2024-04-25 21:20:46 | 2024-04-25 21:44:04 | 0:23:18 | 0:16:53 | 0:06:25 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{centos_latest}} | 2 |
fail | 7673642 | | 2024-04-25 20:00:39 | 2024-04-25 20:18:35 | 2024-04-25 21:07:30 | 0:48:55 | 0:41:49 | 0:07:06 | smithi | main | centos | 9.stream | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} | 2 |
Failure Reason: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi110 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e221b808d285b9d02e54cc10c53c84188e8e41e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'