| Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
|---|---|---|---|---|---|---|---|---|---|
| smithi151.front.sepia.ceph.com | smithi | True | True | 2024-04-23 16:49:50.749601 | scheduled_yuriw@teuthology | | | x86_64 | /home/teuthworker/archive/yuriw-2024-04-23_14:14:08-rados-wip-yuri3-testing-2024-04-05-0825-distro-default-smithi/7669848 |

| Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| waiting | 7669848 | 2024-04-23 14:19:51 | 2024-04-23 16:49:50 | 2024-04-23 16:49:51 | 0:05:26 | 0:05:26 | | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 4 | |
| fail | 7669788 | 2024-04-23 14:18:48 | 2024-04-23 16:18:54 | 2024-04-23 16:40:13 | 0:21:19 | 0:10:11 | 0:11:08 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats' |
| fail | 7669733 | 2024-04-23 14:17:49 | 2024-04-23 15:51:18 | 2024-04-23 16:05:26 | 0:14:08 | 0:05:25 | 0:08:43 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | Command failed on smithi151 with status 1: 'sudo yum -y install ceph-mgr-dashboard' |
| pass | 7669673 | 2024-04-23 14:16:46 | 2024-04-23 15:16:50 | 2024-04-23 15:51:58 | 0:35:08 | 0:24:35 | 0:10:33 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-balanced} | 4 | |
| fail | 7669639 | 2024-04-23 14:16:10 | 2024-04-23 15:01:55 | 2024-04-23 15:15:46 | 0:13:51 | 0:05:15 | 0:08:36 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/dedup-io-mixed} | 4 | Command failed on smithi191 with status 1: 'sudo yum -y install ceph-mgr-dashboard' |
| fail | 7669626 | 2024-04-23 14:15:56 | 2024-04-23 14:45:31 | 2024-04-23 14:58:49 | 0:13:18 | 0:04:57 | 0:08:21 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/cache-snaps} | 4 | Command failed on smithi162 with status 1: 'sudo yum -y install ceph-mgr-dashboard' |
| fail | 7669587 | 2024-04-23 14:05:08 | 2024-04-23 14:05:42 | 2024-04-23 14:36:41 | 0:30:59 | 0:15:59 | 0:15:00 | smithi | main | ubuntu | 22.04 | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/pool-perm} | 2 | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
| pass | 7669528 | 2024-04-23 09:50:14 | 2024-04-23 09:51:18 | 2024-04-23 10:41:48 | 0:50:30 | 0:39:31 | 0:10:59 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/iozone}} | 3 | |
| pass | 7669491 | 2024-04-23 05:01:39 | 2024-04-23 05:01:39 | 2024-04-23 05:29:30 | 0:27:51 | 0:18:40 | 0:09:11 | smithi | main | centos | 9.stream | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rgw_ec_s3tests}} | 3 | |
| fail | 7669229 | 2024-04-22 22:46:43 | 2024-04-22 23:18:03 | 2024-04-22 23:40:46 | 0:22:43 | 0:10:58 | 0:11:45 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | "2024-04-22T23:38:09.641011+0000 mon.a (mon.0) 509 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.2 on smithi151 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log |
| pass | 7669131 | 2024-04-22 22:10:53 | 2024-04-23 02:00:58 | 2024-04-23 02:46:10 | 0:45:12 | 0:35:18 | 0:09:54 | smithi | main | centos | 8.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
| fail | 7669030 | 2024-04-22 22:09:18 | 2024-04-23 00:50:09 | 2024-04-23 01:59:03 | 1:08:54 | 0:57:20 | 0:11:34 | smithi | main | centos | 8.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | reached maximum tries (51) after waiting for 300 seconds |
| pass | 7668978 | 2024-04-22 21:33:08 | 2024-04-23 00:14:37 | 2024-04-23 00:51:34 | 0:36:57 | 0:28:09 | 0:08:48 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/rados_api_tests thrashosds-health} | 4 | |
| pass | 7668962 | 2024-04-22 21:32:52 | 2024-04-22 23:54:40 | 2024-04-23 00:15:08 | 0:20:28 | 0:12:04 | 0:08:24 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_pjd thrashosds-health} | 4 | |
| pass | 7668936 | 2024-04-22 21:32:27 | 2024-04-22 22:37:29 | 2024-04-22 23:23:26 | 0:45:57 | 0:36:53 | 0:09:04 | smithi | main | centos | 9.stream | powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-distros/centos_latest tasks/rados_api_tests thrashosds-health} | 4 | |
| fail | 7668881 | 2024-04-22 21:11:07 | 2024-04-22 22:09:44 | 2024-04-22 22:36:52 | 0:27:08 | 0:10:12 | 0:16:56 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} | 2 | Command failed on smithi012 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull' |
| pass | 7668691 | 2024-04-22 20:12:40 | 2024-04-23 02:46:03 | 2024-04-23 03:11:22 | 0:25:19 | 0:18:04 | 0:07:15 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
| pass | 7668653 | 2024-04-22 20:12:03 | 2024-04-22 21:07:05 | 2024-04-22 22:16:45 | 1:09:40 | 0:57:46 | 0:11:54 | smithi | main | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
| pass | 7668601 | 2024-04-22 20:11:13 | 2024-04-22 20:34:41 | 2024-04-22 21:08:49 | 0:34:08 | 0:24:59 | 0:09:09 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
| pass | 7668565 | 2024-04-22 20:10:40 | 2024-04-22 20:12:04 | 2024-04-22 20:34:51 | 0:22:47 | 0:13:54 | 0:08:53 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 | |