Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
smithi163.front.sepia.ceph.com | smithi | True | True | 2022-12-19 13:34:11.420676 | mchangir | | | x86_64 | /home/teuthworker/archive/mchangir-2022-12-16_12:33:24-fs:workload-mds-fix-scrubbing-assertion-failure-for-diri-distro-default-smithi/7120722
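For quick triage it can help to know how long the node has been held. A minimal sketch, assuming the `Locked Since` value above is a UTC timestamp in the format shown (this script is illustrative and not part of any teuthology tooling):

```python
from datetime import datetime, timezone

# Value copied verbatim from the "Locked Since" column above.
locked_since = datetime.strptime(
    "2022-12-19 13:34:11.420676", "%Y-%m-%d %H:%M:%S.%f"
).replace(tzinfo=timezone.utc)  # assumption: the dashboard reports UTC

# How long smithi163 has been locked at the moment this is run.
print(f"locked by mchangir for {datetime.now(timezone.utc) - locked_since}")
```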
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
dead | 7121177 | 2022-12-18 05:37:11 | 2022-12-19 13:34:01 | | | | | smithi | main | rhel | 8.6 | rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-radosbench} | 2 | Error reimaging machines: 403 Client Error: Forbidden for url: http://fog.front.sepia.ceph.com/fog/host
dead | 7120219 | 2022-12-15 20:43:41 | 2022-12-15 20:44:13 | | | | | smithi | main | ubuntu | 20.04 | rgw/thrash/{civetweb clusters/fixed-2 install objectstore/bluestore-bitmap thrasher/default thrashosds-health workload/rgw_user_quota} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc83742b50>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120208 | 2022-12-15 20:43:29 | 2022-12-15 20:43:45 | 2022-12-15 20:43:59 | 0:00:14 | | | smithi | main | ubuntu | 20.04 | rgw/website/{clusters/fixed-2 frontend/civetweb http overrides tasks/s3tests-website ubuntu_latest} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f22419959d0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120207 | 2022-12-15 20:43:28 | 2022-12-15 20:43:45 | 2022-12-15 20:43:48 | 0:00:03 | | | smithi | main | centos | 8.stream | rgw/verify/{centos_latest clusters/fixed-2 frontend/beast ignore-pg-availability msgr-failures/few objectstore/filestore-xfs overrides proto/http rgw_pool_type/replicated sharding$/{single} striping$/{stripe-equals-chunk} tasks/{0-install cls ragweed reshard s3tests-java s3tests} validater/lockdep} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f228ce0cb20>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120180 | 2022-12-15 20:42:57 | 2022-12-15 20:43:35 | 2022-12-15 20:43:39 | 0:00:04 | | | smithi | main | ubuntu | 20.04 | rgw/multisite/{clusters frontend/beast ignore-pg-availability omap_limits overrides realms/three-zone-plus-pubsub tasks/test_multi valgrind} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9ac6b618e0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120172 | 2022-12-15 20:10:36 | 2022-12-15 20:11:05 | 2022-12-15 20:11:07 | 0:00:02 | | | smithi | main | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f837cf2bb50>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120149 | 2022-12-15 20:10:14 | 2022-12-15 20:10:47 | 2022-12-15 20:11:01 | 0:00:14 | | | smithi | main | ubuntu | 20.04 | rgw/thrash/{clusters/fixed-2 frontend/beast install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02a63258e0>: Failed to establish a new connection: [Errno 113] No route to host'))
dead | 7120133 | 2022-12-15 20:09:59 | 2022-12-15 20:10:42 | 2022-12-15 20:10:44 | 0:00:02 | | | smithi | main | ubuntu | 20.04 | rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} | 2 | Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5d2b919910>: Failed to establish a new connection: [Errno 113] No route to host'))
pass | 7118866 | 2022-12-15 13:23:00 | 2022-12-15 16:01:55 | 2022-12-15 19:19:47 | 3:17:52 | 3:07:09 | 0:10:43 | smithi | main | rhel | 8.6 | rbd/qemu/{cache/none clusters/{fixed-3 openstack} features/journaling msgr-failures/few objectstore/bluestore-comp-lz4 pool/none supported-random-distro$/{rhel_8} workloads/qemu_xfstests} | 3 |
pass | 7118832 | 2022-12-15 13:18:33 | 2022-12-15 15:27:19 | 2022-12-15 16:02:58 | 0:35:39 | 0:25:10 | 0:10:29 | smithi | main | rhel | 8.6 | rbd/librbd/{cache/writeback clusters/{fixed-3 openstack} config/none min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy pool/replicated-data-pool supported-random-distro$/{rhel_8} workloads/python_api_tests_with_defaults} | 3 |
pass | 7118742 | 2022-12-15 13:05:28 | 2022-12-15 13:37:17 | 2022-12-15 15:26:30 | 1:49:13 | 1:35:21 | 0:13:52 | smithi | main | ubuntu | 20.04 | rbd/maintenance/{base/install clusters/{fixed-3 openstack} objectstore/bluestore-hybrid qemu/xfstests supported-random-distro$/{ubuntu_latest} workloads/rebuild_object_map} | 3 |
pass | 7118689 | 2022-12-15 11:18:44 | 2022-12-15 11:23:58 | 2022-12-15 13:39:12 | 2:15:14 | 2:04:35 | 0:10:39 | smithi | main | centos | 8.stream | rados:thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} | 2 |
fail | 7118584 | 2022-12-15 09:05:34 | 2022-12-15 10:57:35 | 2022-12-15 11:18:38 | 0:21:03 | | | smithi | main | centos | 8.stream | rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | machine smithi149.front.sepia.ceph.com is locked by scheduled_ksirivad@teuthology, not scheduled_nmordech@teuthology
fail | 7118569 | 2022-12-15 09:03:39 | 2022-12-15 10:38:28 | 2022-12-15 11:03:19 | 0:24:51 | | | smithi | main | rhel | 8.6 | rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | Command failed on smithi149 with status 100: 'sudo apt-get clean'
fail | 7118558 | 2022-12-15 09:01:43 | 2022-12-15 10:28:07 | 2022-12-15 10:40:51 | 0:12:44 | | | smithi | main | ubuntu | 20.04 | rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 | Stale jobs detected, aborting.
pass | 7118522 | 2022-12-15 08:57:44 | 2022-12-15 09:43:15 | 2022-12-15 10:28:42 | 0:45:27 | 0:31:26 | 0:14:01 | smithi | main | ubuntu | 20.04 | rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 |
pass | 7118494 | 2022-12-15 08:54:31 | 2022-12-15 09:03:51 | 2022-12-15 09:43:52 | 0:40:01 | 0:30:52 | 0:09:09 | smithi | main | centos | 8.stream | rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} | 3 |
pass | 7118041 | 2022-12-15 06:54:15 | 2022-12-15 08:18:53 | 2022-12-15 09:02:05 | 0:43:12 | 0:32:37 | 0:10:35 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/mon_recovery validater/valgrind} | 2 |
dead | 7117981 | 2022-12-15 06:46:15 | 2022-12-15 07:45:33 | 2022-12-15 08:17:34 | 0:32:01 | | | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados tasks/rados_cls_all validater/valgrind} | 2 | Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
dead | 7117961 | 2022-12-15 06:43:15 | 2022-12-15 07:32:09 | 2022-12-15 07:44:55 | 0:12:46 | 0:03:14 | 0:09:32 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects} | 2 | {'smithi163.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}, 'smithi135.front.sepia.ceph.com': {'changed': False, 'msg': 'All items completed', 'results': [{'_ansible_item_label': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, '_ansible_no_log': False, 'ansible_loop_var': 'item', 'changed': False, 'err': " /dev/vg_nvme: already exists in filesystem\n Run `vgcreate --help' for more information.\n", 'failed': True, 'invocation': {'module_args': {'force': False, 'pesize': '4', 'pv_options': '', 'pvresize': False, 'pvs': ['/dev/nvme0n1'], 'state': 'present', 'vg': 'vg_nvme', 'vg_options': ''}}, 'item': {'key': 'vg_nvme', 'value': {'pvs': '/dev/nvme0n1'}}, 'msg': "Creating volume group 'vg_nvme' failed", 'rc': 3}]}}
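Most of the dead jobs above share a single root cause: the scheduler could not reach the FOG reimaging service at fog.front.sepia.ceph.com, failing either with `403 Client Error: Forbidden` (job 7121177) or with `[Errno 113] No route to host` (jobs 7120133 through 7120219). A minimal diagnostic sketch that distinguishes the two modes, assuming plain HTTP on port 80 as in the failure messages; this probe is illustrative and not part of teuthology itself:

```python
import requests

# URL copied verbatim from the "Error reimaging machines" failure reasons above.
FOG_HOST_URL = "http://fog.front.sepia.ceph.com/fog/host"

def probe_fog() -> str:
    """Classify the two FOG failure modes seen in the job table."""
    try:
        resp = requests.get(FOG_HOST_URL, timeout=10)
        resp.raise_for_status()
        return "reachable"
    except requests.exceptions.ConnectionError:
        # Matches jobs 7120219..7120133: "[Errno 113] No route to host" --
        # the host is down or a network/firewall change dropped the route.
        return "no route / connection refused"
    except requests.exceptions.HTTPError as err:
        # Matches job 7121177: "403 Client Error: Forbidden" -- the host
        # answers but rejects the request (possibly an auth/token problem).
        return f"HTTP error: {err.response.status_code}"

if __name__ == "__main__":
    print(probe_fog())
```

A 403 means the service is up but refusing the request, while `No route to host` points at the network path or the FOG host itself being down, so the two modes call for different escalation.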
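Job 7117961 died differently: Ansible's node preparation failed with `Creating volume group 'vg_nvme' failed` because `/dev/vg_nvme` already existed, i.e. a previous run left a stale volume group behind on both smithi163 and smithi135. A possible manual cleanup, assuming operator SSH access and that the stale group holds nothing worth keeping (hostnames are copied from the failure message; this is a hypothetical helper, not a teuthology command):

```python
import subprocess

# Hosts taken from the failure reason of job 7117961.
HOSTS = ["smithi163.front.sepia.ceph.com", "smithi135.front.sepia.ceph.com"]

for host in HOSTS:
    # Inspect the stale volume group, then force-remove it so the next run
    # can recreate vg_nvme on /dev/nvme0n1 from scratch.
    for cmd in ("sudo vgs vg_nvme", "sudo vgremove -f vg_nvme"):
        subprocess.run(["ssh", host, cmd], check=False)
```

Note that `vgremove -f` also removes any logical volumes still inside the group, so on a shared lab node this should only be run while you hold the lock on the machine.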