Name:          smithi099.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-14 12:04:51.862625
Locked By:     jenkins-build@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   Locked to capture FOG image for Jenkins build 801
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7655246 2024-04-13 21:18:35 2024-04-13 23:18:58 2024-04-14 11:28:49 12:09:51 smithi main centos 9.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-stupid policy/simple rbd-mirror/four-per-cluster supported-random-distro$/{centos_latest} workloads/rbd-mirror-snapshot-stress-workunit-exclusive-lock} 2
Failure Reason: hit max job timeout

dead 7655230 2024-04-13 21:18:18 2024-04-13 22:58:49 2024-04-13 23:18:42 0:19:53 smithi main centos 9.stream rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated extra-conf/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{centos_latest} workloads/c_api_tests} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7655192 2024-04-13 21:17:39 2024-04-13 22:14:27 2024-04-13 22:59:06 0:44:39 0:35:02 0:09:37 smithi main ubuntu 22.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated extra-conf/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
pass 7655001 2024-04-13 11:34:22 2024-04-13 11:52:24 2024-04-13 14:35:56 2:43:32 2:36:46 0:06:46 smithi main rhel 8.6 upgrade/quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
pass 7654917 2024-04-12 23:24:16 2024-04-13 00:07:25 2024-04-13 00:29:33 0:22:08 0:12:41 0:09:27 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_ragweed ubuntu_latest} 2
pass 7654877 2024-04-12 23:23:42 2024-04-12 23:34:31 2024-04-13 00:07:51 0:33:20 0:22:51 0:10:29 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_s3tests ubuntu_latest} 2
pass 7654840 2024-04-12 22:41:37 2024-04-13 21:10:15 2024-04-13 22:14:20 1:04:05 0:52:44 0:11:21 smithi main centos 8.stream rgw/verify/{0-install clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_8} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7654729 2024-04-12 21:40:41 2024-04-13 00:51:33 2024-04-13 01:44:55 0:53:22 0:42:39 0:10:43 smithi main ubuntu 22.04 rgw/verify/{0-install clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
pass 7654708 2024-04-12 21:40:21 2024-04-13 00:29:43 2024-04-13 00:52:45 0:23:02 0:14:00 0:09:02 smithi main centos 9.stream rgw/dbstore/{cluster ignore-pg-availability overrides s3tests-branch supported-random-distro$/{centos_latest} tasks/rgw_s3tests} 1
pass 7654677 2024-04-12 21:11:06 2024-04-12 23:13:32 2024-04-12 23:34:27 0:20:55 0:12:11 0:08:44 smithi main centos 9.stream orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} 2
pass 7654644 2024-04-12 21:10:33 2024-04-12 22:48:06 2024-04-12 23:13:40 0:25:34 0:15:47 0:09:47 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_host_drain} 3
pass 7654611 2024-04-12 21:10:00 2024-04-12 22:26:21 2024-04-12 22:47:56 0:21:35 0:11:55 0:09:40 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
fail 7654408 2024-04-12 17:37:16 2024-04-13 05:45:34 2024-04-13 09:55:43 4:10:09 3:55:26 0:14:43 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: Command failed (workunit test suites/fsstress.sh) on smithi181 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=218a691417ef57c5d22a8d5cf3c5f41f7b542838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

pass 7654356 2024-04-12 17:36:33 2024-04-13 04:55:05 2024-04-13 05:48:14 0:53:09 0:38:34 0:14:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} 3
fail 7654293 2024-04-12 17:35:41 2024-04-13 03:52:35 2024-04-13 04:40:29 0:47:54 0:36:57 0:10:57 smithi main ubuntu 22.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=218a691417ef57c5d22a8d5cf3c5f41f7b542838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'

pass 7654233 2024-04-12 17:34:52 2024-04-13 02:48:32 2024-04-13 03:53:13 1:04:41 0:52:52 0:11:49 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/exports} 2
fail 7654131 2024-04-12 15:16:53 2024-04-12 15:35:50 2024-04-12 17:33:02 1:57:12 1:48:24 0:08:48 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason: "2024-04-12T16:05:30.463879+0000 mon.a (mon.0) 352 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7653871 2024-04-12 12:45:40 2024-04-12 13:22:46 2024-04-12 13:45:43 0:22:57 0:13:03 0:09:54 smithi main centos 9.stream rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{centos_latest}} 2
pass 7653823 2024-04-12 12:44:59 2024-04-12 12:54:29 2024-04-12 13:22:59 0:28:30 0:15:54 0:12:36 smithi main centos 9.stream rgw/notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/amqp/{0-install centos_latest test_amqp}} 2
fail 7653757 2024-04-12 09:02:54 2024-04-12 09:03:59 2024-04-12 12:37:25 3:33:26 3:22:44 0:10:42 smithi main centos 9.stream upgrade/reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason: "2024-04-12T10:57:38.744468+0000 osd.1 (osd.1) 20 : cluster [ERR] 61.9 soid 61:90b0d9bc:::smithi099682301-188:head : object info inconsistent , snapset inconsistent , attr value mismatch '__header'" in cluster log