| Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
|---|---|---|---|---|---|---|---|---|---|
| smithi063.front.sepia.ceph.com | smithi | True | True | 2023-05-15 09:50:59.760912 | amathuri@teuthology | ubuntu | 22.04 | x86_64 | None |
| Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pass | 7273737 | 2023-05-15 07:31:27 | 2023-05-15 07:32:13 | 2023-05-15 08:57:16 | 1:25:03 | 1:01:56 | 0:23:07 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} | 4 | |
| dead | 7273452 | 2023-05-13 14:24:18 | 2023-05-13 14:25:13 | 2023-05-14 02:43:38 | 12:18:25 | | | smithi | main | rhel | 8.6 | upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} | 2 | hit max job timeout |
| pass | 7273233 | 2023-05-12 19:31:24 | 2023-05-13 05:16:28 | 2023-05-13 09:16:18 | 3:59:50 | 3:44:56 | 0:14:54 | smithi | main | ubuntu | 22.04 | rbd/encryption/{cache/none clusters/{fixed-3 openstack} features/defaults msgr-failures/few objectstore/bluestore-comp-snappy pool/none supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_none_luks2} | 3 | |
| pass | 7273201 | 2023-05-12 19:31:11 | 2023-05-13 04:50:36 | 2023-05-13 05:15:45 | 0:25:09 | 0:12:43 | 0:12:26 | smithi | main | ubuntu | 22.04 | rbd/singleton/{all/snap-diff objectstore/bluestore-comp-zlib openstack supported-random-distro$/{ubuntu_latest}} | 1 | |
| pass | 7273177 | 2023-05-12 19:31:01 | 2023-05-13 04:27:19 | 2023-05-13 04:48:37 | 0:21:18 | 0:10:45 | 0:10:33 | smithi | main | ubuntu | 20.04 | rbd/singleton/{all/read-flags-writeback objectstore/bluestore-comp-lz4 openstack supported-random-distro$/{ubuntu_20.04}} | 1 | |
| dead | 7273169 | 2023-05-12 19:30:58 | 2023-05-13 04:18:32 | 2023-05-13 04:26:29 | 0:07:57 | 0:02:01 | 0:05:56 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} | 2 | {'smithi063.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi148.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}} |
| pass | 7273123 | 2023-05-12 19:30:38 | 2023-05-13 03:42:32 | 2023-05-13 04:18:32 | 0:36:00 | 0:21:27 | 0:14:33 | smithi | main | ubuntu | 22.04 | rbd/thrash/{base/install clusters/{fixed-2 openstack} msgr-failures/few objectstore/bluestore-stupid supported-random-distro$/{ubuntu_latest} thrashers/cache thrashosds-health workloads/rbd_fsx_deep_copy} | 2 | |
| pass | 7273067 | 2023-05-12 19:30:15 | 2023-05-13 03:06:39 | 2023-05-13 03:42:48 | 0:36:09 | 0:24:40 | 0:11:29 | smithi | main | ubuntu | 20.04 | rbd/librbd/{cache/none clusters/{fixed-3 openstack} config/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-zstd pool/ec-data-pool supported-random-distro$/{ubuntu_20.04} workloads/fsx} | 3 | |
| pass | 7273036 | 2023-05-12 19:30:02 | 2023-05-13 02:43:41 | 2023-05-13 03:06:47 | 0:23:06 | 0:11:26 | 0:11:40 | smithi | main | centos | 8.stream | rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
| pass | 7272864 | 2023-05-12 19:28:45 | 2023-05-13 01:00:06 | 2023-05-13 02:43:40 | 1:43:34 | 1:27:10 | 0:16:24 | smithi | main | centos | 8.stream | rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
| dead | 7272855 | 2023-05-12 19:28:41 | 2023-05-13 00:56:33 | 2023-05-13 00:59:54 | 0:03:21 | | | smithi | main | centos | 8.stream | fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} | 2 | Error reimaging machines: Failed to power on smithi063 |
| dead | 7272846 | 2023-05-12 19:28:38 | 2023-05-13 00:52:40 | 2023-05-13 00:55:49 | 0:03:09 | | | smithi | main | rhel | 8.6 | fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} | 2 | Error reimaging machines: Failed to power on smithi026 |
| fail | 7272798 | 2023-05-12 19:28:18 | 2023-05-13 00:22:55 | 2023-05-13 00:52:04 | 0:29:09 | 0:17:12 | 0:11:57 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} | 5 | Command failed on smithi063 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s' |
| fail | 7272758 | 2023-05-12 19:28:01 | 2023-05-12 23:44:00 | 2023-05-13 00:23:27 | 0:39:27 | 0:31:15 | 0:08:12 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} | 3 | error during scrub thrashing: rank damage found: {'backtrace'} |
| fail | 7272630 | 2023-05-12 19:27:16 | 2023-05-12 21:36:55 | 2023-05-12 23:46:12 | 2:09:17 | 1:58:30 | 0:10:47 | smithi | main | ubuntu | 20.04 | fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} | 2 | saw valgrind issues |
| pass | 7272585 | 2023-05-12 19:26:57 | 2023-05-12 21:03:21 | 2023-05-12 21:36:49 | 0:33:28 | 0:21:02 | 0:12:26 | smithi | main | ubuntu | 22.04 | fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} | 2 | |
| pass | 7272534 | 2023-05-12 19:26:31 | 2023-05-12 20:12:50 | 2023-05-12 21:03:56 | 0:51:06 | 0:37:02 | 0:14:04 | smithi | main | ubuntu | 22.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 | |
| pass | 7272510 | 2023-05-12 19:26:13 | 2023-05-12 19:34:03 | 2023-05-12 20:15:08 | 0:41:05 | 0:30:29 | 0:10:36 | smithi | main | rhel | 8.6 | smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-random-distro$/{rhel_8} tasks/{0-install test/kclient_workunit_direct_io}} | 3 | |
| fail | 7272434 | 2023-05-12 08:25:47 | 2023-05-12 09:05:44 | 2023-05-12 13:35:03 | 4:29:19 | 4:18:27 | 0:10:52 | smithi | main | rhel | 8.6 | fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} | 3 | Command failed (workunit test kernel_untar_build.sh) on smithi063 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5a6b166b7c43e2b5890833e75dd257ab21264ded TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh' |
| fail | 7272417 | 2023-05-12 08:25:36 | 2023-05-12 08:33:22 | 2023-05-12 09:05:47 | 0:32:25 | 0:16:38 | 0:15:47 | smithi | main | ubuntu | 22.04 | fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} | 5 | Command failed on smithi083 with status 2: 'TESTDIR=/home/ubuntu/cephtest bash -s' |