Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi110.front.sepia.ceph.com | smithi | True | True | 2024-05-01 21:56:17.278738 | scheduled_vshankar@teuthology | centos | 9 | x86_64 | /home/teuthworker/archive/vshankar-2024-05-01_17:34:00-fs-wip-vshankar-testing-20240430.111407-debug-testing-default-smithi/7683791 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
running | 7683791 | 2024-05-01 17:37:22 | 2024-05-01 21:56:17 | 2024-05-01 22:09:28 | 0:13:46 | | | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/workunit/dir-max-entries} | 2 |
pass | 7683721 | 2024-05-01 17:36:27 | 2024-05-01 20:18:44 | 2024-05-01 21:56:06 | 1:37:22 | 1:25:59 | 0:11:23 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/dbench}} | 3 | |
pass | 7683679 | 2024-05-01 17:35:53 | 2024-05-01 19:25:46 | 2024-05-01 20:18:41 | 0:52:55 | 0:41:49 | 0:11:06 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/fsstress}} | 3 | |
fail | 7683603 | 2024-05-01 17:34:50 | 2024-05-01 18:07:44 | 2024-05-01 19:24:17 | 1:16:33 | 1:02:50 | 0:13:43 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/postgres}} | 3 | |
Failure Reason: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
fail | 7683556 | 2024-05-01 15:19:21 | 2024-05-01 17:51:57 | 2024-05-01 18:06:11 | 0:14:14 | 0:05:18 | 0:08:56 | smithi | main | centos | 9.stream | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} tasks/scrub_test} | 2 | |
Failure Reason: Command failed on smithi110 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
fail | 7683485 | 2024-05-01 15:17:49 | 2024-05-01 17:30:02 | 2024-05-01 17:45:31 | 0:15:29 | 0:05:33 | 0:09:56 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
Failure Reason: Command failed on smithi069 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
dead | 7682943 | 2024-05-01 14:58:26 | 2024-05-01 15:00:07 | 2024-05-01 15:11:00 | 0:10:53 | | | smithi | main | centos | 9.stream | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} | 1 |
pass | 7682900 | 2024-05-01 11:38:09 | 2024-05-01 14:12:09 | 2024-05-01 15:00:10 | 0:48:01 | 0:37:48 | 0:10:13 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7682859 | 2024-05-01 11:37:29 | 2024-05-01 13:41:04 | 2024-05-01 14:11:45 | 0:30:41 | 0:19:05 | 0:11:36 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
fail | 7682817 | 2024-05-01 11:36:47 | 2024-05-01 12:55:49 | 2024-05-01 13:41:15 | 0:45:26 | 0:38:29 | 0:06:57 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass | 7682781 | 2024-05-01 10:04:07 | 2024-05-01 12:08:22 | 2024-05-01 12:52:01 | 0:43:39 | 0:29:50 | 0:13:49 | smithi | main | centos | 9.stream | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} | 2 | |
fail | 7682733 | 2024-05-01 09:57:38 | 2024-05-01 11:21:42 | 2024-05-01 11:55:20 | 0:33:38 | 0:22:20 | 0:11:18 | smithi | main | ubuntu | 22.04 | fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} | 2 | |
Failure Reason: Test failure: test_volume_info_pending_subvol_deletions (tasks.cephfs.test_volumes.TestVolumes)
fail | 7682707 | 2024-05-01 09:57:07 | 2024-05-01 10:50:17 | 2024-05-01 11:09:23 | 0:19:06 | 0:12:01 | 0:07:05 | smithi | main | centos | 9.stream | fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} | 2 | |
Failure Reason: Test failure: test_clone_failure_status_in_progress_cancelled (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
pass | 7682565 | 2024-05-01 05:13:29 | 2024-05-01 06:02:42 | 2024-05-01 06:53:07 | 0:50:25 | 0:30:42 | 0:19:43 | smithi | main | centos | 9.stream | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} | 2 | |
pass | 7682346 | 2024-05-01 01:10:07 | 2024-05-01 15:11:13 | 2024-05-01 17:30:29 | 2:19:16 | 2:11:52 | 0:07:24 | smithi | main | rhel | 8.6 | upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} | 2 | |
pass | 7682290 | 2024-05-01 01:09:09 | 2024-05-01 08:18:03 | 2024-05-01 10:48:36 | 2:30:33 | 2:21:59 | 0:08:34 | smithi | main | rhel | 8.6 | upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} | 2 | |
pass | 7682009 | 2024-04-30 21:41:07 | 2024-05-01 07:35:05 | 2024-05-01 08:18:43 | 0:43:38 | 0:36:22 | 0:07:16 | smithi | main | centos | 9.stream | rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} | 2 | |
pass | 7681966 | 2024-04-30 21:40:23 | 2024-05-01 06:53:13 | 2024-05-01 07:34:18 | 0:41:05 | 0:33:35 | 0:07:30 | smithi | main | centos | 9.stream | rgw/lifecycle/{cluster ignore-pg-availability overrides s3tests-branch supported-random-distro$/{centos_latest} tasks/rgw_s3tests} | 1 | |
fail | 7681404 | 2024-04-30 15:45:29 | 2024-04-30 16:04:29 | 2024-04-30 19:25:46 | 3:21:17 | 3:11:50 | 0:09:27 | smithi | main | centos | 9.stream | fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} | 2 | |
Failure Reason: Command failed (workunit test libcephfs/test.sh) on smithi073 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=fdca039ef5582086ed3d22c4e41f61c0f9f8048c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'
pass | 7681195 | 2024-04-30 14:35:54 | 2024-04-30 19:34:00 | 2024-04-30 22:16:10 | 2:42:10 | 2:28:12 | 0:13:58 | smithi | main | ubuntu | 22.04 | fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-iozone}} | 3 |