Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
mira084.front.sepia.ceph.com | mira | True | False | | | | | x86_64 | None
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
dead | 6541133 | 2021-12-02 21:13:53 | 2021-12-02 21:13:53 | 2021-12-02 21:28:55 | 0:15:02 | | | mira | master | ubuntu | 20.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/rados_api_tests} | 2 | Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
dead | 6541039 | 2021-12-02 20:48:17 | 2021-12-02 20:49:05 | 2021-12-02 20:49:07 | 0:00:02 | | | mira | master | centos | 8.3 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} | 2 | Error reimaging machines: Could not find an image for centos 8.3
dead | 6179507 | 2021-06-18 15:30:58 | 2021-06-21 16:05:32 | 2021-06-21 16:48:24 | 0:42:52 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | Error reimaging machines: Failed to power on mira053
fail | 6179506 | 2021-06-18 15:30:57 | 2021-06-21 16:02:31 | 2021-06-21 16:44:23 | 0:41:52 | 0:09:34 | 0:32:18 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira065 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2'
dead | 6179503 | 2021-06-18 15:30:54 | 2021-06-21 15:47:57 | 2021-06-21 16:05:16 | 0:17:19 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
fail | 6179479 | 2021-06-18 15:30:31 | 2021-06-18 15:30:32 | 2021-06-18 15:59:01 | 0:28:29 | 0:09:32 | 0:18:57 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira026 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2'
dead | 6124785 | 2021-05-20 10:15:17 | 2021-05-20 10:15:18 | 2021-05-20 10:30:22 | 0:15:04 | | | mira | master | rhel | 8.3 | rados:cephadm/smoke/{distro/rhel_8.3_kubic_stable fixed-2 mon_election/classic start} | 2 | Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds
fail | 6092032 | 2021-05-03 05:55:41 | 2021-05-03 23:51:35 | 2021-05-04 00:49:52 | 0:58:17 | 0:36:44 | 0:21:33 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6092030 | 2021-05-03 05:55:39 | 2021-05-03 22:57:58 | 2021-05-03 23:56:24 | 0:58:26 | 0:36:14 | 0:22:12 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
dead | 6092027 | 2021-05-03 05:55:37 | 2021-05-03 22:57:57 | 2021-05-03 23:02:28 | 0:04:31 | | | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_2 tasks/rbd_import_export} | 4 | Error reimaging machines: Failed to power on mira053
fail | 6092021 | 2021-05-03 05:55:32 | 2021-05-03 05:56:12 | 2021-05-03 06:37:54 | 0:41:42 | 0:28:19 | 0:13:23 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6074872 | 2021-04-26 05:55:52 | 2021-04-26 10:02:17 | 2021-04-26 10:53:04 | 0:50:47 | 0:27:56 | 0:22:51 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6074867 | 2021-04-26 05:55:48 | 2021-04-26 09:03:00 | 2021-04-26 10:12:23 | 1:09:23 | 0:37:36 | 0:31:47 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6074861 | 2021-04-26 05:55:43 | 2021-04-26 08:09:13 | 2021-04-26 09:17:52 | 1:08:39 | 0:36:47 | 0:31:52 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6074859 | 2021-04-26 05:55:42 | 2021-04-26 07:55:53 | 2021-04-26 08:22:47 | 0:26:54 | 0:05:09 | 0:21:45 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2'
fail | 6074852 | 2021-04-26 05:55:36 | 2021-04-26 06:37:54 | 2021-04-26 08:05:11 | 1:27:17 | 0:36:57 | 0:50:20 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6074850 | 2021-04-26 05:55:34 | 2021-04-26 06:33:30 | 2021-04-26 07:10:01 | 0:36:31 | 0:09:24 | 0:27:07 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest python_versions/python_2 tasks/rbd_import_export} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2'
fail | 6074845 | 2021-04-26 05:55:29 | 2021-04-26 05:55:36 | 2021-04-26 06:35:25 | 0:39:49 | 0:27:46 | 0:12:03 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
fail | 6058636 | 2021-04-19 05:56:05 | 2021-04-19 10:10:22 | 2021-04-19 11:09:54 | 0:59:32 | 0:09:17 | 0:50:15 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4 | Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap 2'
fail | 6058633 | 2021-04-19 05:56:02 | 2021-04-19 09:49:05 | 2021-04-19 10:31:33 | 0:42:28 | 0:27:45 | 0:14:43 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes