Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi189.front.sepia.ceph.com | smithi | True | False | | | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/teuthology-2024-04-25_20:32:15-powercycle-main-distro-default-smithi/7673712 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 7674896 | | 2024-04-26 17:25:23 | 2024-04-26 17:32:23 | 2024-04-27 05:48:02 | 12:15:39 | | | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} | 3 |
| | | | | | | | | | | | | | Failure Reason: hit max job timeout | |
pass | 7674871 | | 2024-04-26 15:09:00 | 2024-04-26 16:18:39 | 2024-04-26 16:56:03 | 0:37:24 | 0:30:35 | 0:06:49 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 |
pass | 7674844 | | 2024-04-26 15:08:19 | 2024-04-26 16:00:56 | 2024-04-26 16:18:56 | 0:18:00 | 0:10:30 | 0:07:30 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 |
pass | 7674813 | | 2024-04-26 15:07:33 | 2024-04-26 15:39:52 | 2024-04-26 16:01:50 | 0:21:58 | 0:12:46 | 0:09:12 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 |
pass | 7674784 | | 2024-04-26 15:06:50 | 2024-04-26 15:09:07 | 2024-04-26 15:39:53 | 0:30:46 | 0:22:34 | 0:08:12 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 |
fail | 7674561 | | 2024-04-26 02:08:58 | 2024-04-26 04:20:16 | 2024-04-26 07:45:39 | 3:25:23 | 3:19:27 | 0:05:56 | smithi | main | centos | 9.stream | upgrade/reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} | 2 |
| | | | | | | | | | | | | | Failure Reason: "2024-04-26T06:20:00.000174+0000 mon.a (mon.0) 6980 : cluster 4 [ERR] OSD_SCRUB_ERRORS: 5 scrub errors" in cluster log | |
fail | 7674498 | | 2024-04-26 01:29:35 | 2024-04-26 03:52:23 | 2024-04-26 04:16:42 | 0:24:19 | 0:16:07 | 0:08:12 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 |
| | | | | | | | | | | | | | Failure Reason: "2024-04-26T04:10:00.000216+0000 mon.a (mon.0) 436 : cluster 3 [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)" in cluster log | |
pass | 7674472 | | 2024-04-26 01:29:04 | 2024-04-26 03:38:20 | 2024-04-26 03:53:25 | 0:15:05 | 0:07:54 | 0:07:11 | smithi | main | centos | 9.stream | rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} | 1 |
pass | 7674396 | | 2024-04-26 01:27:42 | 2024-04-26 02:59:45 | 2024-04-26 03:38:16 | 0:38:31 | 0:27:30 | 0:11:01 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
pass | 7674192 | | 2024-04-25 22:34:28 | 2024-04-26 11:40:00 | 2024-04-26 13:02:48 | 1:22:48 | 1:09:16 | 0:13:32 | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_20.04 tasks/radosbench thrashosds-health} | 4 |
pass | 7674168 | | 2024-04-25 22:34:03 | 2024-04-26 11:13:36 | 2024-04-26 11:42:15 | 0:28:39 | 0:17:07 | 0:11:32 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} | 4 |
pass | 7674128 | | 2024-04-25 22:33:23 | 2024-04-26 10:27:24 | 2024-04-26 11:14:06 | 0:46:42 | 0:36:20 | 0:10:22 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} | 4 |
pass | 7673712 | | 2024-04-25 20:34:45 | 2024-04-27 08:44:16 | 2024-04-27 09:39:18 | 0:55:02 | 0:44:30 | 0:10:32 | smithi | main | ubuntu | 22.04 | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} | 4 |
pass | 7673685 | | 2024-04-25 20:34:19 | 2024-04-27 08:16:48 | 2024-04-27 08:45:20 | 0:28:32 | 0:19:59 | 0:08:33 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_misc thrashosds-health} | 4 |
dead | 7673533 | | 2024-04-25 14:50:04 | 2024-04-25 14:51:49 | 2024-04-26 03:01:54 | 12:10:05 | | | smithi | main | centos | 9.stream | rgw:notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/amqp/{0-install centos_latest test_amqp}} | 2 |
| | | | | | | | | | | | | | Failure Reason: hit max job timeout | |
pass | 7673503 | | 2024-04-25 13:51:43 | 2024-04-25 14:21:04 | 2024-04-25 14:51:20 | 0:30:16 | 0:19:00 | 0:11:16 | smithi | main | ubuntu | 22.04 | fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/with-quiesce 2-workunit/suites/iozone}} | 2 |
fail | 7673434 | | 2024-04-25 12:13:50 | 2024-04-25 12:16:22 | 2024-04-25 13:03:35 | 0:47:13 | 0:37:26 | 0:09:47 | smithi | main | centos | 9.stream | rbd:nvmeof/{base/install centos_latest conf/{disable-pool-app} workloads/nvmeof_thrash} | 4 |
| | | | | | | | | | | | | | Failure Reason: "2024-04-25T12:51:28.584478+0000 mon.a (mon.0) 33 : cluster [WRN] Health detail: HEALTH_WARN 2 failed cephadm daemon(s)" in cluster log | |
fail | 7673175 | | 2024-04-25 10:02:46 | 2024-04-25 13:19:34 | 2024-04-25 14:08:16 | 0:48:42 | 0:25:38 | 0:23:04 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} | 3 |
| | | | | | | | | | | | | | Failure Reason: error during scrub thrashing: rank damage found: {'backtrace'} | |
pass | 7673153 | | 2024-04-25 10:02:23 | 2024-04-25 13:04:03 | 2024-04-25 13:29:53 | 0:25:50 | 0:13:57 | 0:11:53 | smithi | main | ubuntu | 22.04 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/finisher_per_module}} | 2 |
fail | 7673102 | | 2024-04-25 10:01:30 | 2024-04-25 10:37:11 | 2024-04-25 12:04:32 | 1:27:21 | 1:17:23 | 0:09:58 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} | 3 |
| | | | | | | | | | | | | | Failure Reason: error during scrub thrashing: Command failed on smithi027 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:4 damage ls' | |