Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi067.front.sepia.ceph.com smithi True True 2024-05-10 11:08:50.385995 scheduled_dparmar@teuthology centos 9 x86_64 /home/teuthworker/archive/dparmar-2024-05-10_07:38:09-fs-wip-63896-from-wip-rishabh-testing-20240501.193033-distro-default-smithi/7700944
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7700944 2024-05-10 07:38:44 2024-05-10 11:08:40 2024-05-10 13:24:16 2:16:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/fs/misc}} 3
fail 7700803 2024-05-10 05:14:45 2024-05-10 07:15:42 2024-05-10 07:34:07 0:18:25 0:06:23 0:12:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/iozone}} 3
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d pull'
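
To check whether the image itself is reachable from a node, a rough equivalent is to pull it with the container engine directly (a sketch; cephadm pull delegates to podman/docker under the hood, and the registry and tag below are copied from the failure above):

    sudo podman pull quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:25eb20a061356442d4a1c711818ce2e5848c382d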

fail 7700740 2024-05-10 05:13:24 2024-05-10 06:34:39 2024-05-10 07:15:56 0:41:17 0:25:36 0:15:41 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 97475f14-0e9a-11ef-bc98-c7b262605968 -e sha1=25eb20a061356442d4a1c711818ce2e5848c382d -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
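
For readability, the heavily escaped command above expands to the following check inside the cephadm shell ($sha1 is the target build SHA passed via -e):

    ceph versions | jq -e '.overall | keys' | grep $sha1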

pass 7700566 2024-05-10 02:36:46 2024-05-10 04:26:00 2024-05-10 06:38:38 2:12:38 2:02:45 0:09:53 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
pass 7700507 2024-05-10 02:35:46 2024-05-10 03:37:49 2024-05-10 04:29:06 0:51:17 0:38:38 0:12:39 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} 3
pass 7700432 2024-05-10 02:34:30 2024-05-10 02:37:50 2024-05-10 03:37:58 1:00:08 0:38:02 0:22:06 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8 clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7700405 2024-05-10 02:09:04 2024-05-10 08:02:55 2024-05-10 11:03:52 3:00:57 2:44:19 0:16:38 smithi main centos 9.stream upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
Failure Reason:

"1715329863.398539 mon.a (mon.0) 661 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7700366 2024-05-10 02:08:27 2024-05-10 07:46:37 2024-05-10 08:07:44 0:21:07 0:10:13 0:10:54 smithi main ubuntu 22.04 upgrade/quincy-x/filestore-remove-check/{0-cluster/{openstack start} 1-ceph-install/quincy 2-upgrade objectstore/filestore-xfs ubuntu_latest} 1
pass 7699756 2024-05-09 19:24:31 2024-05-09 19:41:41 2024-05-09 20:38:03 0:56:22 0:45:10 0:11:12 smithi main ubuntu 22.04 rgw:verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
dead 7699754 2024-05-09 19:24:29 2024-05-09 19:37:49 2024-05-09 19:41:34 0:03:45 smithi main centos 9.stream rgw:verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi079

fail 7699684 2024-05-09 10:23:32 2024-05-09 10:35:47 2024-05-09 11:00:52 0:25:05 0:14:24 0:10:41 smithi main centos 9.stream fs:volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/misc}} 2
Failure Reason:

Test failure: test_dangling_symlink (tasks.cephfs.test_volumes.TestMisc)
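
A common way to reproduce a single cephfs volumes test like this against a local vstart cluster (a sketch, assuming the usual Ceph developer workflow with vstart_runner, run from a build directory):

    python3 ../qa/tasks/vstart_runner.py tasks.cephfs.test_volumes.TestMisc.test_dangling_symlink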

fail 7699607 2024-05-09 09:15:30 2024-05-09 09:18:50 2024-05-09 10:22:36 1:03:46 0:53:48 0:09:58 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=95010940e57695d9647431563a9151dfcbae0003 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7699571 2024-05-09 05:02:31 2024-05-09 12:04:02 2024-05-09 12:48:12 0:44:10 0:34:27 0:09:43 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_api_tests}} 3
pass 7699539 2024-05-09 03:11:12 2024-05-09 03:59:21 2024-05-09 04:28:09 0:28:48 0:17:10 0:11:38 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} 2
pass 7699478 2024-05-09 03:10:08 2024-05-09 03:20:42 2024-05-09 04:00:20 0:39:38 0:29:25 0:10:13 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 7699451 2024-05-09 02:35:28 2024-05-09 02:38:23 2024-05-09 03:12:46 0:34:23 0:22:54 0:11:29 smithi main ubuntu 22.04 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 4
Failure Reason:

reached maximum tries (91) after waiting for 540 seconds

pass 7699148 2024-05-08 22:11:53 2024-05-09 05:35:29 2024-05-09 06:54:02 1:18:33 1:05:26 0:13:07 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/postgres}} 3
pass 7699121 2024-05-08 22:11:24 2024-05-09 05:09:43 2024-05-09 05:35:22 0:25:39 0:14:00 0:11:39 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/pool-perm} 2
pass 7699073 2024-05-08 22:10:35 2024-05-09 04:22:45 2024-05-09 05:11:27 0:48:42 0:33:45 0:14:57 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/pjd}} 3
pass 7698992 2024-05-08 22:09:09 2024-05-09 01:51:13 2024-05-09 02:38:46 0:47:33 0:34:31 0:13:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/suites/fsx}} 3