Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi176.front.sepia.ceph.com smithi True True 2024-05-14 14:17:59.428630 scheduled_vshankar@teuthology x86_64 /home/teuthworker/archive/vshankar-2024-05-14_07:04:04-fs-wip-vshankar-testing-20240509.053109-debug-testing-default-smithi/7705868
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7705906 2024-05-14 13:37:16 2024-05-14 13:54:35 2024-05-14 14:18:31 0:23:56 0:12:11 0:11:45 smithi main centos 9.stream fs:functional:/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/sessionmap} 2
waiting 7705868 2024-05-14 07:07:19 2024-05-14 14:17:49 2024-05-14 14:18:00 0:02:28 0:02:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/fs/misc}} 3
pass 7705829 2024-05-14 07:06:48 2024-05-14 13:08:50 2024-05-14 13:57:19 0:48:29 0:37:50 0:10:39 smithi main centos 9.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 7705812 2024-05-14 07:06:34 2024-05-14 12:45:10 2024-05-14 13:10:08 0:24:58 0:13:43 0:11:15 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/recovery-fs} 2
fail 7705395 2024-05-14 00:32:01 2024-05-14 00:57:59 2024-05-14 12:42:20 11:44:21 11:29:15 0:15:06 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/xfstests-dev} 2
Failure Reason:

Test failure: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)

pass 7705153 2024-05-13 21:33:05 2024-05-13 23:59:49 2024-05-14 00:30:00 0:30:11 0:14:48 0:15:23 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
pass 7705104 2024-05-13 21:32:17 2024-05-13 23:06:10 2024-05-14 00:06:03 0:59:53 0:46:31 0:13:22 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
pass 7705029 2024-05-13 21:10:33 2024-05-13 22:21:08 2024-05-13 23:08:50 0:47:42 0:34:53 0:12:49 smithi main centos 9.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7704975 2024-05-13 21:09:42 2024-05-13 21:46:42 2024-05-13 22:20:42 0:34:00 0:15:53 0:18:07 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
pass 7704936 2024-05-13 21:09:05 2024-05-13 21:31:24 2024-05-13 21:55:23 0:23:59 0:13:48 0:10:11 smithi main centos 9.stream orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} 2
pass 7704699 2024-05-13 18:34:01 2024-05-13 18:39:26 2024-05-13 19:15:23 0:35:57 0:24:34 0:11:23 smithi main ubuntu 22.04 krbd:fsx/{ceph/ceph clusters/3-node conf features/object-map ms_mode$/{secure} objectstore/bluestore-bitmap striping/default/{msgr-failures/many randomized-striping-off} tasks/fsx-1-client} 3
fail 7704675 2024-05-13 07:44:05 2024-05-13 08:59:42 2024-05-13 09:22:55 0:23:13 0:13:36 0:09:37 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Command failed on smithi022 with status 28: 'curl --silent -L https://4.chacra.ceph.com/binaries/ceph/wip-guits-testing-2024-05-07-1127/27b8a84bdbbc98c0a93f5c419d52fad9a786cdf8/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm'
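
curl exit status 28 means the transfer timed out. A minimal sketch of re-checking the same cephadm download by hand with explicit timeouts (the timeout values and output path are illustrative, not teuthology defaults):

# Re-fetch the cephadm binary from chacra; status 28 again would confirm a timeout.
curl --silent --show-error -L --connect-timeout 30 --max-time 300 \
    https://4.chacra.ceph.com/binaries/ceph/wip-guits-testing-2024-05-07-1127/27b8a84bdbbc98c0a93f5c419d52fad9a786cdf8/centos/9/x86_64/flavors/default/cephadm \
    -o /tmp/cephadm && ls -l /tmp/cephadm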

pass 7704639 2024-05-13 07:43:16 2024-05-13 08:34:55 2024-05-13 08:59:50 0:24:55 0:16:29 0:08:26 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7704518 2024-05-13 00:32:24 2024-05-13 03:00:16 2024-05-13 04:32:39 1:32:23 1:24:26 0:07:57 smithi main rhel 8.6 upgrade:pacific-x/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 7704415 2024-05-12 22:05:49 2024-05-13 15:04:11 2024-05-13 15:47:16 0:43:05 0:36:48 0:06:17 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{rhel_8} tasks/progress} 2
fail 7704359 2024-05-12 22:04:53 2024-05-13 14:40:37 2024-05-13 15:00:56 0:20:19 0:09:23 0:10:56 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/9-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/pool-create-delete} 2
Failure Reason:

HTTPSConnectionPool(host='4.chacra.ceph.com', port=443): Max retries exceeded with url: /repos/ceph/reef/b806bdbddfddd976c2919d3cca5c05faad473799/ubuntu/jammy/flavors/default/repo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fa99117c6d0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
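
The NewConnectionError above indicates the job node could not reach chacra at all (connection timed out). A sketch of probing the same repo URL from a shell, with illustrative timeout values:

# Probe the chacra repo endpoint that timed out; print the HTTP status if it responds.
curl --silent --show-error -L --connect-timeout 10 --max-time 60 \
    https://4.chacra.ceph.com/repos/ceph/reef/b806bdbddfddd976c2919d3cca5c05faad473799/ubuntu/jammy/flavors/default/repo \
    -o /dev/null -w 'HTTP %{http_code}\n'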

pass 7704283 2024-05-12 22:03:36 2024-05-13 14:03:42 2024-05-13 14:41:13 0:37:31 0:26:23 0:11:08 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7704210 2024-05-12 22:02:21 2024-05-13 13:30:50 2024-05-13 14:05:57 0:35:07 0:28:32 0:06:35 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_set_mon_crush_locations} 3
fail 7704164 2024-05-12 22:01:35 2024-05-13 13:01:38 2024-05-13 13:21:05 0:19:27 0:09:18 0:10:09 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
Failure Reason:

HTTPSConnectionPool(host='4.chacra.ceph.com', port=443): Max retries exceeded with url: /repos/ceph/reef/b806bdbddfddd976c2919d3cca5c05faad473799/ubuntu/jammy/flavors/default/repo (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f2ef1c8edf0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

fail 7704022 2024-05-12 21:26:33 2024-05-12 22:50:15 2024-05-13 00:55:52 2:05:37 1:56:35 0:09:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}
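
The scrub thrasher reported backtrace damage on an MDS rank. A sketch of listing such damage entries on a live filesystem (the MDS identifier is a placeholder, not taken from this run):

# List damage entries recorded by rank 0 of the filesystem; "backtrace" entries appear here.
ceph tell mds.<fs_name>:0 damage ls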