Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
mira082.front.sepia.ceph.com | mira | True | True | 2023-07-31 15:44:33.107581 | smanjara@teuthology | | | x86_64 | None |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 6541132 | 2021-12-02 21:13:52 | 2021-12-02 21:13:52 | 2021-12-02 21:13:53 | 0:00:01 | | | mira | master | rhel | 8.4 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} | 2 | Error reimaging machines: Could not find an image for rhel 8.4 |
dead | 6541040 | 2021-12-02 20:48:18 | 2021-12-02 20:49:06 | 2021-12-02 20:49:07 | 0:00:01 | | | mira | master | rhel | 8.4 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} | 2 | Error reimaging machines: Could not find an image for rhel 8.4 |
fail | 6179508 | 2021-06-18 15:30:59 | 2021-06-21 16:44:26 | 2021-06-21 17:19:06 | 0:34:40 | 0:09:45 | 0:24:55 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira047 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
fail | 6179505 | 2021-06-18 15:30:56 | 2021-06-21 15:47:58 | 2021-06-21 16:44:29 | 0:56:31 | 0:28:11 | 0:28:20 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
dead | 6179502 | 2021-06-18 15:30:53 | 2021-06-19 17:51:06 | 2021-06-21 16:02:15 | 1 day, 22:11:09 | | | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_2 tasks/rbd_import_export} | 4 | Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds |
fail | 6179477 | 2021-06-18 15:30:29 | 2021-06-18 15:30:30 | 2021-06-18 16:10:11 | 0:39:41 | 0:09:50 | 0:29:51 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira066 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
pass | 6124774 | 2021-05-20 10:07:46 | 2021-05-20 10:08:25 | 2021-05-20 10:48:27 | 0:40:02 | 0:26:45 | 0:13:17 | mira | master | ubuntu | 20.04 | rados:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait mon_election/classic} | 2 | |
fail | 6092028 | 2021-05-03 05:55:38 | 2021-05-03 22:57:57 | 2021-05-03 23:51:25 | 0:53:28 | 0:36:10 | 0:17:18 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6092025 | 2021-05-03 05:55:35 | 2021-05-03 07:17:42 | 2021-05-03 17:50:56 | 10:33:14 | 0:30:12 | 10:03:02 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6092021 | 2021-05-03 05:55:32 | 2021-05-03 05:56:12 | 2021-05-03 06:37:54 | 0:41:42 | 0:28:19 | 0:13:23 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6074872 | 2021-04-26 05:55:52 | 2021-04-26 10:02:17 | 2021-04-26 10:53:04 | 0:50:47 | 0:27:56 | 0:22:51 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6074867 | 2021-04-26 05:55:48 | 2021-04-26 09:03:00 | 2021-04-26 10:12:23 | 1:09:23 | 0:37:36 | 0:31:47 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6074861 | 2021-04-26 05:55:43 | 2021-04-26 08:09:13 | 2021-04-26 09:17:52 | 1:08:39 | 0:36:47 | 0:31:52 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6074859 | 2021-04-26 05:55:42 | 2021-04-26 07:55:53 | 2021-04-26 08:22:47 | 0:26:54 | 0:05:09 | 0:21:45 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
fail | 6074852 | 2021-04-26 05:55:36 | 2021-04-26 06:37:54 | 2021-04-26 08:05:11 | 1:27:17 | 0:36:57 | 0:50:20 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6074850 | 2021-04-26 05:55:34 | 2021-04-26 06:33:30 | 2021-04-26 07:10:01 | 0:36:31 | 0:09:24 | 0:27:07 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_2 tasks/rbd_import_export} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
fail | 6074845 | 2021-04-26 05:55:29 | 2021-04-26 05:55:36 | 2021-04-26 06:35:25 | 0:39:49 | 0:27:46 | 0:12:03 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6058636 | 2021-04-19 05:56:05 | 2021-04-19 10:10:22 | 2021-04-19 11:09:54 | 0:59:32 | 0:09:17 | 0:50:15 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2' |
fail | 6058633 | 2021-04-19 05:56:02 | 2021-04-19 09:49:05 | 2021-04-19 10:31:33 | 0:42:28 | 0:27:45 | 0:14:43 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 6058628 | 2021-04-19 05:55:57 | 2021-04-19 08:42:40 | 2021-04-19 09:51:26 | 1:08:46 | 0:40:47 | 0:27:59 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |