Name          smithi085.front.sepia.ceph.com
Machine Type  smithi
Up            True
Locked        True
Locked Since  2023-06-12 21:20:30.153332
Locked By     zack@teuthology
OS Type
OS Version
Arch          x86_64
Description   None
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7301656 2023-06-12 20:35:40 2023-06-12 20:37:43 2023-06-12 21:04:03 0:26:20 0:16:46 0:09:34 smithi main ubuntu 20.04 rgw:crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_old 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

name 'tvdir' is not defined

fail 7301578 2023-06-12 19:24:15 2023-06-12 20:19:59 2023-06-12 20:37:52 0:17:53 0:06:17 0:11:36 smithi main ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

502 Server Error: Bad Gateway for url: https://1.chacra.ceph.com/repos/ceph/wip-yuri4-testing-2023-06-12-0753-quincy/2a9d6e18278d6560716f7708433741d1d561ab07/ubuntu/focal/flavors/default/repo

pass 7301456 2023-06-12 19:22:39 2023-06-12 19:23:33 2023-06-12 20:19:59 0:56:26 0:31:17 0:25:09 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
dead 7301450 2023-06-12 18:32:28 2023-06-12 18:32:28 2023-06-12 18:47:24 0:14:56 0:05:00 0:09:56 smithi main centos 9.stream rgw:verify/{0-install clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{cls ragweed reshard s3tests-java s3tests} validater/lockdep} 2
Failure Reason:

{'smithi141.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': True, 'cmd': ['dnf', '-y', 'config-manager', '--set-enabled', 'powertools'], 'delta': '0:00:00.204232', 'end': '2023-06-12 18:46:01.208275', 'invocation': {'module_args': {'_raw_params': 'dnf -y config-manager --set-enabled powertools', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 1, 'start': '2023-06-12 18:46:01.004043', 'stderr': 'Error: No matching repo to modify: powertools.', 'stderr_lines': ['Error: No matching repo to modify: powertools.'], 'stdout': '', 'stdout_lines': [], 'warnings': ["Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message."]}, 'smithi085.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': True, 'cmd': ['dnf', '-y', 'config-manager', '--set-enabled', 'powertools'], 'delta': '0:00:00.204306', 'end': '2023-06-12 18:46:01.339921', 'invocation': {'module_args': {'_raw_params': 'dnf -y config-manager --set-enabled powertools', '_uses_shell': False, 'argv': None, 'chdir': None, 'creates': None, 'executable': None, 'removes': None, 'stdin': None, 'stdin_add_newline': True, 'strip_empty_ends': True, 'warn': True}}, 'msg': 'non-zero return code', 'rc': 1, 'start': '2023-06-12 18:46:01.135615', 'stderr': 'Error: No matching repo to modify: powertools.', 'stderr_lines': ['Error: No matching repo to modify: powertools.'], 'stdout': '', 'stdout_lines': [], 'warnings': ["Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message."]}}

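(Note: the "No matching repo to modify: powertools" failure above ran on CentOS 9.stream, where the PowerTools repository was renamed to "crb" (CodeReady Builder); the hard-coded repo name only exists on CentOS Stream 8. A hedged sketch of an enable command that works on either release — the fallback ordering here is an assumption, not the teuthology playbook's actual logic:)

```shell
# CentOS Stream 9 renamed the "powertools" repo to "crb";
# try the new name first, then fall back to the old one.
sudo dnf -y config-manager --set-enabled crb || \
    sudo dnf -y config-manager --set-enabled powertools
```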
fail 7301382 2023-06-12 17:29:17 2023-06-12 17:30:12 2023-06-12 18:28:45 0:58:33 0:40:51 0:17:42 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi085 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

pass 7301379 2023-06-12 14:58:06 2023-06-12 14:59:05 2023-06-12 15:23:05 0:24:00 0:13:25 0:10:35 smithi main ubuntu 20.04 netsplit/{ceph cluster msgr rados supported-random-distro$/{ubuntu_20.04} tests/mon_pool_ops} 3
fail 7301292 2023-06-12 07:24:30 2023-06-12 07:24:30 2023-06-12 11:12:26 3:47:56 3:30:16 0:17:40 smithi main centos 8.stream crimson-rados/basic/{centos_latest clusters/fixed-2 crimson_qa_overrides deploy/ceph tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi085 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=99127073e7f0aa852eb13c3e1e3213137a8a55d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 CRIMSON_COMPAT=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7301238 2023-06-11 18:23:10 2023-06-11 18:24:01 2023-06-11 18:59:22 0:35:21 0:17:28 0:17:53 smithi main ubuntu 20.04 rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/barbican 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

Cannot create secret

pass 7301186 2023-06-11 16:59:09 2023-06-11 17:00:11 2023-06-11 17:49:07 0:48:56 0:33:43 0:15:13 smithi main centos 8.stream rgw/cloud-transition/{cluster overrides s3tests-branch supported-random-distro$/{centos_8} tasks/cloud_transition_s3tests} 1
pass 7301182 2023-06-11 01:35:48 2023-06-11 01:36:37 2023-06-11 06:11:05 4:34:28 4:21:06 0:13:22 smithi main ubuntu 20.04 upgrade:quincy-p2p/quincy-p2p-stress-split/{0-cluster/{openstack start} 1-ceph-install/quincy 1.1.short_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 4-workload/{fsx radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 5-finish-upgrade 6-final-workload/{rbd-python snaps-many-objects} objectstore/bluestore-stupid supported-all-distro/ubuntu_latest thrashosds-health} 3
pass 7301115 2023-06-10 14:24:47 2023-06-10 20:13:15 2023-06-10 22:13:32 2:00:17 1:45:13 0:15:04 smithi main centos 8.stream upgrade:quincy-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
pass 7301049 2023-06-10 14:24:21 2023-06-10 17:59:54 2023-06-10 20:18:23 2:18:29 2:11:15 0:07:14 smithi main rhel 8.6 upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
fail 7300996 2023-06-10 14:24:01 2023-06-10 16:22:16 2023-06-10 18:01:38 1:39:22 1:33:02 0:06:20 smithi main rhel 8.6 upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7300974 2023-06-10 14:23:52 2023-06-10 14:44:20 2023-06-10 16:22:28 1:38:08 1:13:51 0:24:17 smithi main centos 8.stream upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools_crun 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rbd.sh) on smithi085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'

fail 7300668 2023-06-09 21:04:24 2023-06-10 05:40:55 2023-06-10 14:59:23 9:18:28 8:18:11 1:00:17 smithi main centos 8.stream rgw/multisite/{clusters frontend/beast ignore-pg-availability notify omap_limits overrides realms/two-zonegroup supported-random-distro$/{centos_8} tasks/test_multi} 2
Failure Reason:

rgw multisite test failures

pass 7300640 2023-06-09 21:03:57 2023-06-10 05:07:06 2023-06-10 05:43:32 0:36:26 0:26:13 0:10:13 smithi main ubuntu 22.04 rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-hybrid supported-random-distro$/{ubuntu_latest} workloads/rbd-mirror-snapshot-workunit-fast-diff} 2
pass 7300546 2023-06-09 21:03:11 2023-06-10 03:55:01 2023-06-10 05:07:08 1:12:07 0:58:59 0:13:08 smithi main centos 8.stream rbd/mirror-thrash/{base/install clients/mirror cluster/{2-node openstack} msgr-failures/few objectstore/bluestore-comp-snappy policy/none rbd-mirror/four-per-cluster supported-random-distro$/{centos_8} workloads/rbd-mirror-snapshot-stress-workunit-fast-diff} 2
pass 7300515 2023-06-09 21:03:00 2023-06-10 03:33:35 2023-06-10 03:55:30 0:21:55 0:14:17 0:07:38 smithi main rhel 8.6 rbd/singleton/{all/read-flags-writethrough objectstore/bluestore-comp-snappy openstack supported-random-distro$/{rhel_8}} 1
pass 7300478 2023-06-09 21:02:46 2023-06-10 02:58:54 2023-06-10 03:34:23 0:35:29 0:25:46 0:09:43 smithi main rhel 8.6 rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 2
pass 7300436 2023-06-09 21:02:30 2023-06-10 02:22:10 2023-06-10 03:01:10 0:39:00 0:27:45 0:11:15 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} 2