Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
smithi185.front.sepia.ceph.com | smithi | False | True | 2022-08-23 01:40:01.342263 | scheduled_yuriw@teuthology | centos | 8 | x86_64 | Marked down by ceph-cm-ansible due to missing NVMe card 2022-08-23T01:46:40Z |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6986124 | 2022-08-22 20:24:07 | 2022-08-22 20:34:32 | 2022-08-23 00:10:37 | 3:36:05 | 3:28:29 | 0:07:36 | smithi | main | centos | 8.stream | rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} | 1 |
Failure Reason:
Command failed (workunit test osd/repro_long_log.sh) on smithi185 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7f3fe3baa9155a87a01dfb21efc3f6d35f6a6ebf TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/repro_long_log.sh'
dead | 6985284 | 2022-08-22 16:29:25 | 2022-08-23 01:38:20 | 2022-08-23 01:49:52 | 0:11:32 | 0:03:11 | 0:08:21 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 |
Failure Reason:
{'smithi185.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed': False, 'msg': 'Failing rest of playbook due to missing NVMe card'}}
pass | 6985218 | 2022-08-22 16:27:55 | 2022-08-23 01:13:49 | 2022-08-23 01:37:39 | 0:23:50 | 0:15:57 | 0:07:53 | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 |
pass | 6985191 | 2022-08-22 16:27:20 | 2022-08-23 00:58:35 | 2022-08-23 01:14:14 | 0:15:39 | 0:07:55 | 0:07:44 | smithi | main | centos | 8.stream | rados/multimon/{clusters/21 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} | 3 |
pass | 6985128 | 2022-08-22 16:25:58 | 2022-08-23 00:29:40 | 2022-08-23 00:58:50 | 0:29:10 | 0:21:35 | 0:07:35 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} | 2 |
pass | 6985079 | 2022-08-22 16:25:00 | 2022-08-23 00:10:35 | 2022-08-23 00:30:59 | 0:20:24 | 0:12:39 | 0:07:45 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} | 3 |
pass | 6984973 | 2022-08-22 16:22:51 | 2022-08-22 20:20:40 | 2022-08-22 20:34:53 | 0:14:13 | 0:07:25 | 0:06:48 | smithi | main | ubuntu | 20.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_5925} | 2 |
pass | 6984352 | 2022-08-21 20:34:14 | 2022-08-22 18:56:54 | 2022-08-22 20:20:57 | 1:24:03 | 1:16:40 | 0:07:23 | smithi | main | rhel | 8.4 | fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} | 2 |
pass | 6984287 | 2022-08-21 20:33:05 | 2022-08-22 18:23:01 | 2022-08-22 18:57:24 | 0:34:23 | 0:26:46 | 0:07:37 | smithi | main | centos | 8.stream | fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} | 2 |
pass | 6984168 | 2022-08-21 20:30:13 | 2022-08-22 17:13:43 | 2022-08-22 18:24:00 | 1:10:17 | 1:03:32 | 0:06:45 | smithi | main | ubuntu | 18.04 | rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/radosbench} | 3 |
pass | 6984144 | 2022-08-21 20:29:47 | 2022-08-22 17:01:12 | 2022-08-22 17:13:10 | 0:11:58 | 0:05:51 | 0:06:07 | smithi | main | ubuntu | 20.04 | rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} | 1 |
pass | 6984064 | 2022-08-21 20:28:23 | 2022-08-22 16:24:45 | 2022-08-22 17:01:14 | 0:36:29 | 0:29:20 | 0:07:09 | smithi | main | rhel | 8.4 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_workunit_loadgen_mostlyread} | 2 |
pass | 6983978 | 2022-08-21 20:26:52 | 2022-08-22 15:46:05 | 2022-08-22 16:25:40 | 0:39:35 | 0:32:18 | 0:07:17 | smithi | main | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} | 2 |
pass | 6983902 | 2022-08-21 20:25:32 | 2022-08-22 15:10:04 | 2022-08-22 15:46:20 | 0:36:16 | 0:26:09 | 0:10:07 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} | 2 |
dead | 6983261 | 2022-08-21 18:26:08 | 2022-08-21 18:26:55 | 2022-08-21 18:27:09 | 0:00:14 | | | smithi | main | centos | 8.stream | orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9fe8cf82e8>: Failed to establish a new connection: [Errno 113] No route to host',))
dead | 6983258 | 2022-08-21 18:26:05 | 2022-08-21 18:26:54 | 2022-08-21 18:26:58 | 0:00:04 | | | smithi | main | centos | 8.stream | orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa931f3e4e0>: Failed to establish a new connection: [Errno 113] No route to host',))
dead | 6983252 | 2022-08-21 18:25:58 | 2022-08-21 18:26:42 | 2022-08-21 18:26:46 | 0:00:04 | | | smithi | main | centos | 8.stream | orch:cephadm/upgrade_without_reducing_max_mds/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} | 2 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcc4c6e20b8>: Failed to establish a new connection: [Errno 113] No route to host',))
dead | 6983245 | 2022-08-21 18:25:50 | 2022-08-21 18:26:29 | 2022-08-21 18:26:33 | 0:00:04 | | | smithi | main | centos | 8.stream | orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f83e06332e8>: Failed to establish a new connection: [Errno 113] No route to host',))
dead | 6983237 | 2022-08-21 12:15:25 | 2022-08-21 13:45:15 | 2022-08-21 13:45:30 | 0:00:15 | | | smithi | main | rhel | 8.3 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-hybrid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_pjd thrashosds-health whitelist_health} | 4 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4ebdef8e48>: Failed to establish a new connection: [Errno 113] No route to host',))
dead | 6983235 | 2022-08-21 12:15:22 | 2022-08-21 13:45:05 | 2022-08-21 13:45:20 | 0:00:15 | | | smithi | main | ubuntu | 20.04 | powercycle/osd/{clusters/3osd-1per-target objectstore/bluestore-comp-zlib powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_suites_fsx thrashosds-health whitelist_health} | 4 |
Failure Reason:
Error reimaging machines: HTTPConnectionPool(host='fog.front.sepia.ceph.com', port=80): Max retries exceeded with url: /fog/host (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f04048dadd8>: Failed to establish a new connection: [Errno 113] No route to host',))