Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi110.front.sepia.ceph.com smithi True True 2024-03-04 20:55:03.130979 scheduled_yuriw@teuthology centos 8 x86_64 /home/teuthworker/archive/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581479
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7581479 2024-03-04 20:55:02 2024-03-04 20:55:03 2024-03-04 22:17:15 1:23:50 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/dashboard} 2
pass 7581387 2024-03-04 17:14:32 2024-03-04 17:14:32 2024-03-04 17:42:03 0:27:31 0:14:40 0:12:51 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_user_quota ubuntu_latest} 2
pass 7581327 2024-03-04 15:43:31 2024-03-04 16:26:38 2024-03-04 16:49:50 0:23:12 0:12:49 0:10:23 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_user_quota ubuntu_latest} 2
pass 7581274 2024-03-04 15:42:48 2024-03-04 15:56:20 2024-03-04 16:26:34 0:30:14 0:21:37 0:08:37 smithi main ubuntu 22.04 rgw/website/{clusters/fixed-2 frontend/beast http ignore-pg-availability overrides s3tests-branch tasks/s3tests-website ubuntu_latest} 2
pass 7581228 2024-03-04 15:08:50 2024-03-04 15:32:26 2024-03-04 15:56:26 0:24:00 0:14:41 0:09:19 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7581202 2024-03-04 15:08:25 2024-03-04 15:09:10 2024-03-04 15:33:21 0:24:11 0:14:21 0:09:50 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
fail 7581152 2024-03-04 09:24:14 2024-03-04 09:26:50 2024-03-04 10:22:21 0:55:31 0:44:49 0:10:42 smithi main centos 9.stream fs:upgrade:mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7581061 2024-03-04 08:29:22 2024-03-04 12:02:40 2024-03-04 13:16:44 1:14:04 1:02:30 0:11:34 smithi main centos 9.stream rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
Failure Reason:

Command failed (s3 tests against rgw) on smithi110 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/s3-tests-client.0 && S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt tox -- -v -m 'not fails_on_rgw and not lifecycle_expiration and not lifecycle_transition and not cloud_transition and not test_of_sts and not webidentity_test and not fails_with_subdomain and not sse_s3'"

pass 7580990 2024-03-04 08:28:46 2024-03-04 11:07:59 2024-03-04 12:01:24 0:53:25 0:43:18 0:10:07 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
pass 7580944 2024-03-04 08:28:09 2024-03-04 10:22:15 2024-03-04 11:07:58 0:45:43 0:23:52 0:21:51 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/mds-full} 2
pass 7580811 2024-03-04 07:07:03 2024-03-04 08:33:00 2024-03-04 09:25:54 0:52:54 0:44:36 0:08:18 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
fail 7580752 2024-03-04 07:06:17 2024-03-04 07:46:40 2024-03-04 08:19:37 0:32:57 0:22:27 0:10:30 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
Failure Reason:

Test failure: test_shrink (tasks.cephfs.test_failover.TestClusterResize)

fail 7580704 2024-03-04 07:05:40 2024-03-04 07:06:25 2024-03-04 07:36:09 0:29:44 0:13:45 0:15:59 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

SELinux denials found on ubuntu@smithi110.front.sepia.ceph.com: ['type=AVC msg=audit(1709536849.067:198): avc: denied { checkpoint_restore } for pid=1176 comm="agetty" capability=40 scontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tcontext=system_u:system_r:getty_t:s0-s0:c0.c1023 tclass=capability2 permissive=1']

pass 7580599 2024-03-03 15:56:43 2024-03-03 15:56:43 2024-03-03 16:24:39 0:27:56 0:15:19 0:12:37 smithi main ubuntu 22.04 rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{ubuntu_latest}} 2
dead 7580534 2024-03-03 14:47:43 2024-03-03 14:48:18 2024-03-03 15:08:37 0:20:19 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec s3tests-branch tasks/rgw_bucket_quota ubuntu_latest} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7580243 2024-03-02 21:20:17 2024-03-02 22:39:35 2024-03-02 23:52:13 1:12:38 1:01:30 0:11:08 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-03-02T23:20:00.000237+0000 mon.a (mon.0) 2564 : cluster 3 [WRN] CACHE_POOL_NEAR_FULL: 1 cache pools at or near target size" in cluster log

fail 7580161 2024-03-02 17:52:10 2024-03-03 09:02:35 2024-03-03 09:32:22 0:29:47 0:23:09 0:06:38 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

'Filesystem' object has no attribute 'run_ceph_cmd'

fail 7580107 2024-03-02 17:51:12 2024-03-03 08:16:16 2024-03-03 08:53:15 0:36:59 0:25:29 0:11:30 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

'Filesystem' object has no attribute 'run_ceph_cmd'

fail 7580070 2024-03-02 17:50:32 2024-03-03 07:45:17 2024-03-03 08:16:39 0:31:22 0:24:31 0:06:51 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

'Filesystem' object has no attribute 'run_ceph_cmd'

fail 7580027 2024-03-02 17:49:48 2024-03-03 07:05:58 2024-03-03 07:36:42 0:30:44 0:22:29 0:08:15 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

'Filesystem' object has no attribute 'run_ceph_cmd'