Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi045.front.sepia.ceph.com | smithi | True | True | 2024-04-26 17:00:49.481203 | scheduled_cbodley@teuthology | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/cbodley-2024-04-26_16:59:32-rgw-wip-cbodley2-testing-distro-default-smithi/7674884 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7674884 | 2024-04-26 16:59:37 | 2024-04-26 16:59:49 | 2024-04-26 19:02:24 | 2:02:52 | | | smithi | main | ubuntu | 22.04 | rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} | 2 | |
fail | 7674828 | 2024-04-26 15:07:55 | 2024-04-26 15:52:09 | 2024-04-26 16:53:36 | 1:01:27 | 0:51:47 | 0:09:40 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | reached maximum tries (51) after waiting for 300 seconds |
pass | 7674806 | 2024-04-26 15:07:23 | 2024-04-26 15:33:58 | 2024-04-26 15:54:28 | 0:20:30 | 0:13:29 | 0:07:01 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7674702 | 2024-04-26 12:10:53 | 2024-04-26 12:19:09 | 2024-04-26 15:06:17 | 2:47:08 | 2:40:47 | 0:06:21 | smithi | main | centos | 9.stream | rgw/tools/{centos_latest cluster ignore-pg-availability tasks} | 1 | |
pass | 7674676 | 2024-04-26 07:24:20 | 2024-04-26 07:58:08 | 2024-04-26 08:21:48 | 0:23:40 | 0:13:03 | 0:10:37 | smithi | main | ubuntu | 22.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_user_quota ubuntu_latest} | 2 | |
fail | 7674567 | 2024-04-26 02:09:04 | 2024-04-26 04:21:49 | 2024-04-26 07:57:15 | 3:35:26 | 3:23:22 | 0:12:04 | smithi | main | ubuntu | 22.04 | upgrade/reef-x/stress-split/{0-distro/ubuntu_22.04 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 | "2024-04-26T04:46:17.648865+0000 mon.a (mon.0) 179 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log |
pass | 7674495 | 2024-04-26 01:29:31 | 2024-04-26 03:51:11 | 2024-04-26 04:23:11 | 0:32:00 | 0:21:51 | 0:10:09 | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/small-objects-localized} | 2 | |
pass | 7674449 | 2024-04-26 01:28:40 | 2024-04-26 03:26:09 | 2024-04-26 03:51:28 | 0:25:19 | 0:18:27 | 0:06:52 | smithi | main | centos | 9.stream | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_latest} tasks/progress} | 2 | |
pass | 7674371 | 2024-04-26 01:27:16 | 2024-04-26 02:47:13 | 2024-04-26 03:26:13 | 0:39:00 | 0:27:49 | 0:11:11 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} | 2 | |
pass | 7674305 | 2024-04-26 01:26:05 | 2024-04-26 02:12:16 | 2024-04-26 02:48:20 | 0:36:04 | 0:26:39 | 0:09:25 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 2 | |
pass | 7674259 | 2024-04-26 01:03:36 | 2024-04-26 01:26:57 | 2024-04-26 02:12:36 | 0:45:39 | 0:36:18 | 0:09:21 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/iozone}} | 3 | |
pass | 7674196 | 2024-04-25 22:34:32 | 2024-04-26 11:48:53 | 2024-04-26 12:19:17 | 0:30:24 | 0:21:47 | 0:08:37 | smithi | main | rhel | 8.6 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/rhel_8 tasks/admin_socket_objecter_requests thrashosds-health} | 4 | |
pass | 7674175 | 2024-04-25 22:34:10 | 2024-04-26 11:26:11 | 2024-04-26 11:50:05 | 0:23:54 | 0:13:31 | 0:10:23 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 | |
pass | 7674108 | 2024-04-25 22:33:03 | 2024-04-26 10:01:52 | 2024-04-26 11:28:48 | 1:26:56 | 1:15:04 | 0:11:52 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/ubuntu_latest tasks/radosbench thrashosds-health} | 4 | |
fail | 7674059 | 2024-04-25 21:32:49 | 2024-04-25 22:19:39 | 2024-04-25 22:40:26 | 0:20:47 | 0:11:56 | 0:08:51 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} | 4 | Command failed (workunit test suites/fsx.sh) on smithi045 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b22e2ebdeb24376882b7bda2a7329c8cccc2276a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh' |
pass | 7674044 | 2024-04-25 21:32:34 | 2024-04-25 21:51:38 | 2024-04-25 22:21:17 | 0:29:39 | 0:16:33 | 0:13:06 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsstress thrashosds-health} | 4 | |
pass | 7674000 | 2024-04-25 21:06:02 | 2024-04-26 01:02:48 | 2024-04-26 01:27:23 | 0:24:35 | 0:14:20 | 0:10:15 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
pass | 7673966 | 2024-04-25 21:05:27 | 2024-04-26 00:43:42 | 2024-04-26 01:03:41 | 0:19:59 | 0:13:14 | 0:06:45 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_rgw_multisite} | 3 | |
pass | 7673875 | 2024-04-25 21:03:52 | 2024-04-26 00:00:28 | 2024-04-26 00:44:29 | 0:44:01 | 0:34:26 | 0:09:35 | smithi | main | ubuntu | 22.04 | rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/snaps-few-objects} | 2 | |
pass | 7673768 | 2024-04-25 21:02:10 | 2024-04-25 22:51:07 | 2024-04-26 00:00:48 | 1:09:41 | 0:28:53 | 0:40:48 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |