Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi055.front.sepia.ceph.com | smithi | True | True | 2024-05-21 21:05:03.054074 | scheduled_yuriw@teuthology | ubuntu | 22.04 | x86_64 | /home/teuthworker/archive/yuriw-2024-05-21_01:07:55-rados-wip-yuri7-testing-2024-05-20-1227-distro-default-smithi/7717805 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7718704 | 2024-05-21 14:53:49 | 2024-05-21 16:03:35 | 2024-05-21 16:25:18 | 0:21:43 | 0:11:56 | 0:09:47 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} | 2 |
Failure Reason: Command failed on smithi055 with status 1: 'set -ex\ndd of=/root/logrotate.conf'
pass | 7718582 | 2024-05-21 13:19:10 | 2024-05-21 15:04:46 | 2024-05-21 16:03:33 | 0:58:47 | 0:50:53 | 0:07:54 | smithi | main | centos | 8.stream | fs/upgrade/upgraded_client/{bluestore-bitmap branch/pacific centos_8.stream clusters/{1-mds-1-client-micro} conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-install 1-mount/mount/fuse 2-clients/fuse-upgrade 3-workload/stress_tests/dbench}} | 2 |
fail | 7718551 | 2024-05-21 13:18:37 | 2024-05-21 14:21:52 | 2024-05-21 15:04:50 | 0:42:58 | 0:34:37 | 0:08:21 | smithi | main | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} | 3 |
Failure Reason: Command failed (workunit test suites/fsx.sh) on smithi028 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d2b507c724ef6c8295ee0a35d185cdab167fd61d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'
fail | 7718512 | 2024-05-21 13:02:25 | 2024-05-21 13:19:58 | 2024-05-21 14:18:58 | 0:59:00 | 0:46:14 | 0:12:46 | smithi | install-fix | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} | 3 |
Failure Reason: "2024-05-21T13:50:00.000242+0000 mon.a (mon.0) 985 : cluster [WRN] application not enabled on pool 'cephfs_metadata'" in cluster log
running | 7717805 | 2024-05-21 01:15:12 | 2024-05-21 21:03:42 | 2024-05-21 21:30:45 | 0:29:02 | | | smithi | main | ubuntu | 22.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} | 4 |
pass | 7717759 | 2024-05-21 01:14:20 | 2024-05-21 20:37:25 | 2024-05-21 21:04:57 | 0:27:32 | 0:18:27 | 0:09:05 | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} | 2 |
dead | 7717752 | 2024-05-21 01:14:13 | 2024-05-21 20:30:41 | 2024-05-21 20:33:25 | 0:02:44 | | | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 |
Failure Reason: Error reimaging machines: Failed to power on smithi172
pass | 7717702 | 2024-05-21 01:13:12 | 2024-05-21 19:55:07 | 2024-05-21 20:32:19 | 0:37:12 | 0:25:28 | 0:11:44 | smithi | main | centos | 9.stream | rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} | 4 |
pass | 7717640 | 2024-05-21 01:12:03 | 2024-05-21 19:19:17 | 2024-05-21 19:57:09 | 0:37:52 | 0:24:52 | 0:13:00 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects} | 4 |
pass | 7717601 | 2024-05-21 01:11:22 | 2024-05-21 18:46:30 | 2024-05-21 19:21:26 | 0:34:56 | 0:25:49 | 0:09:07 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} | 4 |
dead | 7717456 | 2024-05-20 23:58:11 | 2024-05-21 00:39:22 | 2024-05-21 12:48:46 | 12:09:24 | | | smithi | install-fix | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/dbench}} | 3 |
Failure Reason: hit max job timeout
pass | 7717374 | 2024-05-20 23:07:48 | 2024-05-20 23:10:40 | 2024-05-21 00:40:23 | 1:29:43 | 1:18:33 | 0:11:10 | smithi | install-fix | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/fs/misc}} | 3 |
pass | 7716902 | 2024-05-20 20:52:34 | 2024-05-21 16:54:30 | 2024-05-21 18:46:19 | 1:51:49 | 1:40:56 | 0:10:53 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-radosbench} | 4 |
pass | 7716860 | 2024-05-20 20:51:48 | 2024-05-21 16:25:10 | 2024-05-21 16:54:33 | 0:29:23 | 0:20:15 | 0:09:08 | smithi | main | centos | 9.stream | rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest}} | 2 |
pass | 7716816 | 2024-05-20 20:51:01 | 2024-05-21 12:45:36 | 2024-05-21 13:21:47 | 0:36:11 | 0:24:22 | 0:11:49 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 |
pass | 7716633 | 2024-05-20 20:48:26 | 2024-05-20 21:30:29 | 2024-05-20 21:51:12 | 0:20:43 | 0:11:11 | 0:09:32 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} | 2 |
fail | 7715983 | 2024-05-20 18:43:22 | 2024-05-20 20:28:06 | 2024-05-20 21:29:04 | 1:00:58 | 0:51:40 | 0:09:18 | smithi | install-fix | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} | 3 |
Failure Reason: Command failed (workunit test suites/dbench.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ebd1d5361d457738957dc0455bba90102296634 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'
fail | 7715905 | 2024-05-20 18:41:59 | 2024-05-20 19:08:54 | 2024-05-20 20:21:19 | 1:12:25 | 1:01:38 | 0:10:47 | smithi | install-fix | centos | 9.stream | fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/blogbench}} | 3 |
Failure Reason: "2024-05-20T19:48:56.753348+0000 mds.i (mds.0) 29 : cluster [WRN] Scrub error on inode 0x10000000be0 (/volumes/qa/sv_1/f0fb22d4-1a4e-4cb7-b71a-0600344e0bc3/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-13) see mds.i log and `damage ls` output for details" in cluster log
pass | 7715802 | 2024-05-20 18:24:29 | 2024-05-20 22:11:25 | 2024-05-20 23:11:13 | 0:59:48 | 0:48:32 | 0:11:16 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 |
pass | 7715767 | 2024-05-20 15:49:25 | 2024-05-20 21:50:27 | 2024-05-20 22:12:08 | 0:21:41 | 0:11:44 | 0:09:57 | smithi | main | centos | 9.stream | fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/openfiletable} | 2 |