Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
mira109.front.sepia.ceph.com | mira | False | False | | | ubuntu | 18.04 | x86_64 | e-waste 26JUN2020 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 5170788 | | 2020-06-22 05:55:56 | 2020-06-22 07:25:56 | 2020-06-22 08:11:57 | 0:46:01 | 0:15:51 | 0:30:10 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
pass | 5170784 | | 2020-06-22 05:55:52 | 2020-06-22 07:13:37 | 2020-06-22 07:45:38 | 0:32:01 | 0:15:32 | 0:16:29 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_dmcrypt_off distros/ubuntu_latest python_versions/python_3 tasks/ceph-admin-commands} | 4 |
pass | 5170767 | | 2020-06-22 05:55:40 | 2020-06-22 06:23:33 | 2020-06-22 07:17:34 | 0:54:01 | 0:22:34 | 0:31:27 | mira | master | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore distros/centos_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
fail | 5170764 | | 2020-06-22 05:55:37 | 2020-06-22 05:55:38 | 2020-06-22 06:39:39 | 0:44:01 | 0:04:10 | 0:39:51 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_2 tasks/rbd_import_export} | 4 |
Failure Reason: Command failed on mira117 with status 2: 'sudo tar cz -f - -C /var/lib/ceph/mon -- . > /tmp/tmp.GasYFE8Dat'
pass | 5170756 | | 2020-06-22 05:55:30 | 2020-06-22 05:55:31 | 2020-06-22 06:23:31 | 0:28:00 | 0:16:04 | 0:11:56 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/ubuntu_latest python_versions/python_3 tasks/rbd_import_export} | 4 |
fail | 5157755 | | 2020-06-17 13:09:18 | 2020-06-17 13:09:25 | 2020-06-17 13:29:24 | 0:19:59 | 0:03:52 | 0:16:07 | mira | master | ubuntu | 20.04 | upgrade:nautilus-x:stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/nautilus.yaml 1.1-pg-log-overrides/normal_pg_log.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml rgw_ragweed_prepare.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-octopus.yaml 7-msgr2.yaml 8-final-workload/{rbd-python.yaml snaps-many-objects.yaml} objectstore/bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} | 5 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=nautilus
fail | 5157638 | | 2020-06-17 12:40:29 | 2020-06-17 12:40:31 | 2020-06-17 13:00:30 | 0:19:59 | 0:04:15 | 0:15:44 | mira | master | ubuntu | 20.04 | upgrade:nautilus-x:stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/nautilus.yaml 1.1-pg-log-overrides/normal_pg_log.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml rgw_ragweed_prepare.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-octopus.yaml 7-msgr2.yaml 8-final-workload/{rbd-python.yaml snaps-many-objects.yaml} objectstore/bluestore-bitmap.yaml thrashosds-health.yaml ubuntu_latest.yaml} | 5 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&ref=nautilus
pass | 5156865 | | 2020-06-17 05:10:29 | 2020-06-17 05:10:49 | 2020-06-17 05:32:49 | 0:22:00 | 0:15:09 | 0:06:51 | mira | py2 | rhel | 7.5 | ceph-disk/basic/{distros/rhel_latest tasks/ceph-detect-init} | 1 |
fail | 5132461 | | 2020-06-10 05:10:29 | 2020-06-10 05:10:46 | 2020-06-10 06:06:46 | 0:56:00 | 0:10:41 | 0:45:19 | mira | py2 | ubuntu | 16.04 | ceph-disk/basic/{distros/ubuntu_latest tasks/ceph-disk} | 2 |
Failure Reason: Command failed (workunit test ceph-disk/ceph-disk.sh) on mira109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e9675a830b7e7a7164659a2a8f8ad08b8f61358 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'
fail | 5129674 | | 2020-06-08 17:02:12 | 2020-06-08 18:08:26 | 2020-06-08 18:28:26 | 0:20:00 | 0:06:55 | 0:13:05 | mira | py2 | centos | 7.8 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk config_options/cephdeploy_conf distros/centos_7.8 objectstore/bluestore-bitmap python_versions/python_2 tasks/ceph-admin-commands} | 2 |
Failure Reason: Command failed on mira041 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 5129658 | | 2020-06-08 17:01:58 | 2020-06-08 17:34:09 | 2020-06-08 18:08:08 | 0:33:59 | 0:07:37 | 0:26:22 | mira | py2 | centos | 7.8 | ceph-deploy/ceph-volume/{cluster/4node config/ceph_volume_filestore distros/centos_7.8 tasks/rbd_import_export} | 4 |
Failure Reason: Command failed on mira026 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 5129649 | | 2020-06-08 17:01:50 | 2020-06-08 17:15:50 | 2020-06-08 17:31:49 | 0:15:59 | 0:04:52 | 0:11:07 | mira | py2 | ubuntu | 16.04 | ceph-deploy/ceph-volume/{cluster/4node config/ceph_volume_filestore distros/ubuntu_latest tasks/rbd_import_export} | 4 |
Failure Reason: Command failed on mira041 with status 1: 'cd /home/ubuntu/cephtest/ceph-deploy && ./bootstrap'
fail | 5129643 | | 2020-06-08 17:01:44 | 2020-06-08 17:01:49 | 2020-06-08 17:15:48 | 0:13:59 | 0:04:07 | 0:09:52 | mira | py2 | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk config_options/cephdeploy_conf distros/ubuntu_16.04 objectstore/filestore-xfs python_versions/python_3 tasks/ceph-admin-commands} | 2 |
Failure Reason: Command failed on mira041 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 5122571 | | 2020-06-06 20:43:22 | 2020-06-06 21:25:55 | 2020-06-06 21:43:54 | 0:17:59 | 0:06:04 | 0:11:55 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} | 2 |
Failure Reason: No module named 'cStringIO'
fail | 5122556 | | 2020-06-06 20:43:09 | 2020-06-06 21:07:43 | 2020-06-06 21:25:42 | 0:17:59 | 0:06:41 | 0:11:18 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} | 2 |
Failure Reason: No module named 'cStringIO'
fail | 5122538 | | 2020-06-06 20:42:52 | 2020-06-06 20:43:08 | 2020-06-06 21:09:07 | 0:25:59 | 0:06:32 | 0:19:27 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} | 2 |
Failure Reason: No module named 'cStringIO'
pass | 5115572 | | 2020-06-03 05:10:27 | 2020-06-03 05:10:33 | 2020-06-03 05:30:32 | 0:19:59 | 0:08:57 | 0:11:02 | mira | py2 | centos | 7.4 | ceph-disk/basic/{distros/centos_latest tasks/ceph-detect-init} | 1 |
fail | 5112609 | | 2020-06-02 14:42:58 | 2020-06-02 17:15:38 | 2020-06-02 17:29:37 | 0:13:59 | 0:06:47 | 0:07:12 | mira | master | ubuntu | | rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml objectstore/filestore-xfs.yaml overrides.yaml rgw_pool_type/ec.yaml tasks/rgw_bucket_quota.yaml} | 2 |
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'
fail | 5112590 | | 2020-06-02 14:42:39 | 2020-06-02 16:43:21 | 2020-06-02 17:15:21 | 0:32:00 | 0:12:44 | 0:19:16 | mira | master | ubuntu | | rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml objectstore/filestore-xfs.yaml overrides.yaml rgw_pool_type/ec.yaml tasks/rgw_user_quota.yaml} | 2 |
Failure Reason: "2020-06-02T17:07:58.969554+0000 mon.b (mon.0) 180 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 5112579 | | 2020-06-02 14:42:28 | 2020-06-02 16:32:28 | 2020-06-02 16:56:28 | 0:24:00 | 0:06:38 | 0:17:22 | mira | master | ubuntu | 20.04 | rgw/website/{clusters/fixed-2.yaml frontend/civetweb.yaml http.yaml overrides.yaml tasks/s3tests-website.yaml ubuntu_latest.yaml} | 2 |
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'