Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi153.front.sepia.ceph.com smithi True True 2024-04-16 22:32:37.671715 scheduled_teuthology@teuthology centos 9 x86_64 /home/teuthworker/archive/teuthology-2024-04-16_21:16:13-rbd-squid-distro-default-smithi/7658779
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7658779 2024-04-16 21:17:34 2024-04-16 22:32:17 2024-04-16 23:09:55 0:38:44 smithi main centos 9.stream rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zstd supported-random-distro$/{centos_latest} workloads/rbd-mirror-snapshot-workunit-exclusive-lock} 2
pass 7658725 2024-04-16 21:03:07 2024-04-16 21:38:07 2024-04-16 22:32:37 0:54:30 0:41:20 0:13:10 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/quiesce} 2
pass 7658542 2024-04-16 20:19:06 2024-04-16 20:56:32 2024-04-16 21:40:59 0:44:27 0:32:48 0:11:39 smithi main centos 9.stream rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/diff-continuous-nbd} 3
pass 7658487 2024-04-16 20:18:14 2024-04-16 20:19:41 2024-04-16 20:57:28 0:37:47 0:22:10 0:15:37 smithi main centos 9.stream rbd/thrash/{base/install clusters/{fixed-2 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/rbd_fsx_journal} 2
pass 7658310 2024-04-16 12:54:52 2024-04-16 14:42:06 2024-04-16 15:56:39 1:14:33 1:01:29 0:13:04 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7658288 2024-04-16 12:54:27 2024-04-16 14:15:55 2024-04-16 14:44:37 0:28:42 0:15:44 0:12:58 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7658235 2024-04-16 12:53:31 2024-04-16 13:22:06 2024-04-16 14:19:20 0:57:14 0:48:53 0:08:21 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
fail 7658216 2024-04-16 12:38:53 2024-04-16 15:55:53 2024-04-16 16:17:57 0:22:04 0:10:45 0:11:19 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
Failure Reason: Command failed on smithi053 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f --cluster ceph -i a'

fail 7658135 2024-04-16 12:37:49 2024-04-16 12:38:43 2024-04-16 13:09:44 0:31:01 0:16:40 0:14:21 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/multifs-auth} 2
Failure Reason: Test failure: test_r_with_no_fsname_and_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)

fail 7658098 2024-04-16 11:23:44 2024-04-16 11:24:25 2024-04-16 11:49:41 0:25:16 0:13:37 0:11:39 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
Failure Reason: Test failure: test_per_client_labeled_perf_counters_io (tasks.cephfs.test_admin.TestLabeledPerfCounters)

fail 7657975 2024-04-16 07:22:32 2024-04-16 07:23:12 2024-04-16 07:46:57 0:23:45 0:13:42 0:10:03 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/write_fadvise_dontneed} 2
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op write_excl 50 --pool unique_pool_0'

pass 7657930 2024-04-16 05:42:26 2024-04-16 05:43:24 2024-04-16 06:37:54 0:54:30 0:43:19 0:11:11 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/quiesce} 2
pass 7657889 2024-04-16 05:01:36 2024-04-16 05:01:45 2024-04-16 05:36:17 0:34:32 0:21:51 0:12:41 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rados_workunit_loadgen_mix}} 3
pass 7657768 2024-04-15 22:10:11 2024-04-16 00:37:11 2024-04-16 01:32:44 0:55:33 0:44:57 0:10:36 smithi main centos 8.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/16.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7657730 2024-04-15 22:09:34 2024-04-16 00:09:17 2024-04-16 00:37:10 0:27:53 0:16:32 0:11:21 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
pass 7657688 2024-04-15 22:08:53 2024-04-15 23:41:53 2024-04-16 00:09:29 0:27:36 0:20:34 0:07:02 smithi main rhel 8.6 orch/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
pass 7657643 2024-04-15 21:32:45 2024-04-15 23:03:20 2024-04-15 23:41:48 0:38:28 0:26:39 0:11:49 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
fail 7657609 2024-04-15 21:11:38 2024-04-15 22:32:20 2024-04-15 22:57:35 0:25:15 0:11:38 0:13:37 smithi main ubuntu 22.04 orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason: Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a9a752df26c63acad72e1b3569fd79a515ca0765 pull'

fail 7657546 2024-04-15 21:10:33 2024-04-15 22:01:17 2024-04-15 22:21:11 0:19:54 0:07:27 0:12:27 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason: Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a9a752df26c63acad72e1b3569fd79a515ca0765 pull'

fail 7657474 2024-04-15 21:09:21 2024-04-15 21:28:26 2024-04-15 21:46:52 0:18:26 0:07:30 0:10:56 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason: Command failed on smithi008 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a9a752df26c63acad72e1b3569fd79a515ca0765 pull'