Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
mira062.front.sepia.ceph.com | mira | True | True | 2020-06-28 21:20:20.110833 | shraddhaag@teuthology | ubuntu | 18.04 | x86_64 | None
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 5132461 | 2020-06-10 05:10:29 | 2020-06-10 05:10:46 | 2020-06-10 06:06:46 | 0:56:00 | 0:10:41 | 0:45:19 | mira | py2 | ubuntu | 16.04 | ceph-disk/basic/{distros/ubuntu_latest tasks/ceph-disk} | 2
Failure Reason: Command failed (workunit test ceph-disk/ceph-disk.sh) on mira109 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8e9675a830b7e7a7164659a2a8f8ad08b8f61358 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'
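This ceph-disk workunit failure (also hit by jobs 5115573 and 5094202 below) amounts to teuthology cloning the ceph repo onto the target node and running qa/workunits/ceph-disk/ceph-disk.sh under a 3-hour timeout. A minimal local reproduction sketch, assuming a ceph checkout at ~/ceph (hypothetical path; the real job also sets CEPH_REF, TESTDIR, and the other env vars quoted above):

```python
# Reproduction sketch only: run the same workunit script, under the same
# timeout, that the failed jobs ran. Assumes a local ceph checkout at
# ~/ceph and a test cluster to run against.
import os
import subprocess

ceph_root = os.path.expanduser("~/ceph")  # hypothetical checkout path
env = dict(os.environ,
           CEPH_ROOT=ceph_root,
           PATH=os.environ["PATH"] + ":/usr/sbin")

# `timeout 3h` mirrors the job's wrapper; check=True raises on the
# same nonzero exit status the jobs reported.
subprocess.run(
    ["timeout", "3h",
     os.path.join(ceph_root, "qa/workunits/ceph-disk/ceph-disk.sh")],
    env=env,
    check=True,
)
```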
fail | 5122571 | 2020-06-06 20:43:22 | 2020-06-06 21:25:55 | 2020-06-06 21:43:54 | 0:17:59 | 0:06:04 | 0:11:55 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/lockdep} | 2
Failure Reason: No module named 'cStringIO'
fail | 5122556 | 2020-06-06 20:43:09 | 2020-06-06 21:07:43 | 2020-06-06 21:25:42 | 0:17:59 | 0:06:41 | 0:11:18 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} | 2
Failure Reason: No module named 'cStringIO'
fail | 5122538 | 2020-06-06 20:42:52 | 2020-06-06 20:43:08 | 2020-06-06 21:09:07 | 0:25:59 | 0:06:32 | 0:19:27 | mira | master | ubuntu | | rgw:verify/{clusters/fixed-2 frontend/civetweb msgr-failures/few objectstore/filestore-xfs overrides proto/https rgw_pool_type/ec tasks/{0-install cls ragweed s3tests-java s3tests} validater/valgrind} | 2
Failure Reason: No module named 'cStringIO'
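The three `No module named 'cStringIO'` failures above are a Python 2/3 issue: the C-accelerated `cStringIO` module exists only on Python 2 and was removed in Python 3, where `io.StringIO` and `io.BytesIO` replace it. A minimal compatibility sketch (illustrative only, not the actual fix that landed in the suite):

```python
# cStringIO is Python 2 only; importing it on Python 3 raises exactly the
# ModuleNotFoundError quoted in the failure reasons above.
try:
    from cStringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3 (text); use io.BytesIO for bytes

buf = StringIO()
buf.write("hello")
assert buf.getvalue() == "hello"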
fail | 5115573 | 2020-06-03 05:10:28 | 2020-06-03 05:10:33 | 2020-06-03 05:54:33 | 0:44:00 | 0:30:02 | 0:13:58 | mira | py2 | rhel | 7.5 | ceph-disk/basic/{distros/rhel_latest tasks/ceph-disk} | 2
Failure Reason: Command failed (workunit test ceph-disk/ceph-disk.sh) on mira119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b9a0fcc4e3d178d3f3597e5a5dff2b061c58de5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'
fail | 5112592 | 2020-06-02 14:42:41 | 2020-06-02 16:51:47 | 2020-06-02 17:25:47 | 0:34:00 | 0:15:14 | 0:18:46 | mira | master | ubuntu | | rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml objectstore/bluestore-bitmap.yaml overrides.yaml rgw_pool_type/replicated.yaml tasks/rgw_bucket_quota.yaml} | 2
Failure Reason: "2020-06-02T17:19:22.020636+0000 mon.a (mon.0) 193 : cluster [WRN] Health check failed: 6 slow ops, oldest one blocked for 32 sec, mon.b has slow ops (SLOW_OPS)" in cluster log
fail | 5112579 | 2020-06-02 14:42:28 | 2020-06-02 16:32:28 | 2020-06-02 16:56:28 | 0:24:00 | 0:06:38 | 0:17:22 | mira | master | ubuntu | 20.04 | rgw/website/{clusters/fixed-2.yaml frontend/civetweb.yaml http.yaml overrides.yaml tasks/s3tests-website.yaml ubuntu_latest.yaml} | 2
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'
fail | 5112556 | 2020-06-02 14:42:06 | 2020-06-02 15:27:30 | 2020-06-02 16:33:30 | 1:06:00 | 0:05:51 | 1:00:09 | mira | master | ubuntu | | rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml objectstore/bluestore-bitmap.yaml overrides.yaml rgw_pool_type/ec.yaml tasks/rgw_bucket_quota.yaml} | 2
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'
fail | 5112540 | 2020-06-02 14:41:50 | 2020-06-02 15:00:08 | 2020-06-02 15:28:08 | 0:28:00 | 0:10:02 | 0:17:58 | mira | master | ubuntu | 20.04 | rgw/singleton/{all/radosgw-admin.yaml frontend/civetweb.yaml objectstore/filestore-xfs.yaml overrides.yaml rgw_pool_type/ec.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2
Failure Reason: "2020-06-02T15:21:40.340754+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs inactive, 4 pgs peering (PG_AVAILABILITY)" in cluster log
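The SLOW_OPS (job 5112592) and PG_AVAILABILITY (job 5112540) failures above are not command errors: teuthology scrapes the cluster log after the run and fails the job on any WRN/ERR entry not covered by the suite's ignore list. A rough sketch of that kind of check (simplified, not teuthology's actual implementation; the ignore pattern is illustrative):

```python
# Simplified sketch of a post-run cluster-log scrape: fail on the first
# [WRN]/[ERR] line that no ignore pattern matches. Not teuthology's code.
import re

IGNORELIST = [r"\(POOL_APP_NOT_ENABLED\)"]  # illustrative pattern only

def first_offending_line(log_lines, ignorelist=IGNORELIST):
    for line in log_lines:
        if "[WRN]" not in line and "[ERR]" not in line:
            continue
        if not any(re.search(pat, line) for pat in ignorelist):
            return line  # this is the line quoted in the Failure Reason
    return None
```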
fail | 5112531 | 2020-06-02 14:41:40 | 2020-06-02 14:46:24 | 2020-06-02 15:00:24 | 0:14:00 | 0:06:21 | 0:07:39 | mira | master | | | rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml objectstore/bluestore-bitmap.yaml overrides.yaml rgw_pool_type/replicated.yaml tasks/rgw_ragweed.yaml} | 2
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'
pass | 5109663 | 2020-06-01 05:55:41 | 2020-06-01 15:12:20 | 2020-06-01 19:04:25 | 3:52:05 | 2:27:26 | 1:24:39 | mira | py2 | centos | 7.8 | ceph-deploy/{cluster/4node config/ceph_volume_bluestore_dmcrypt distros/centos_latest python_versions/python_2 tasks/ceph-admin-commands} | 4
fail | 5101998 | 2020-05-29 05:55:54 | 2020-05-29 06:47:39 | 2020-05-29 07:19:39 | 0:32:00 | 0:07:55 | 0:24:05 | mira | py2 | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk config_options/cephdeploy_conf distros/centos_7.4 objectstore/bluestore-bitmap python_versions/python_3 tasks/ceph-admin-commands} | 2
Failure Reason: Command failed on mira062 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
pass | 5101970 | 2020-05-29 05:55:28 | 2020-05-29 05:55:31 | 2020-05-29 06:47:32 | 0:52:01 | 0:22:25 | 0:29:36 | mira | py2 | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node config/ceph_volume_dmcrypt_off distros/centos_latest tasks/rbd_import_export} | 4
fail | 5094202 | 2020-05-27 05:10:28 | 2020-05-27 05:10:44 | 2020-05-27 05:48:44 | 0:38:00 | 0:14:11 | 0:23:49 | mira | py2 | centos | 7.4 | ceph-disk/basic/{distros/centos_latest tasks/ceph-disk} | 2
Failure Reason: Command failed (workunit test ceph-disk/ceph-disk.sh) on mira063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b9a0fcc4e3d178d3f3597e5a5dff2b061c58de5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'
pass | 5092964 | 2020-05-26 16:09:58 | 2020-05-26 17:05:59 | 2020-05-26 18:06:00 | 1:00:01 | 0:28:47 | 0:31:14 | mira | py2 | centos | 7.8 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4
pass | 5092943 | 2020-05-26 16:09:38 | 2020-05-26 16:09:40 | 2020-05-26 17:15:40 | 1:06:00 | 0:31:49 | 0:34:11 | mira | py2 | centos | 7.8 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
pass | 5091007 | 2020-05-25 05:55:43 | 2020-05-25 06:40:08 | 2020-05-25 16:12:23 | 9:32:15 | 0:14:51 | 9:17:24 | mira | py2 | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} | 4
fail | 5091002 | 2020-05-25 05:55:39 | 2020-05-25 06:24:06 | 2020-05-25 07:00:05 | 0:35:59 | 0:11:13 | 0:24:46 | mira | py2 | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4
Failure Reason: ceph-deploy: Failed to create osds
pass | 5090996 | 2020-05-25 05:55:34 | 2020-05-25 05:55:46 | 2020-05-25 06:31:47 | 0:36:01 | 0:15:32 | 0:20:29 | mira | py2 | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} | 4
fail | 5072536 | 2020-05-20 07:53:29 | 2020-05-20 20:24:56 | 2020-05-21 01:01:02 | 4:36:06 | 0:08:26 | 4:27:40 | mira | master | ubuntu | | rgw/thrash/{civetweb.yaml clusters/fixed-2.yaml install.yaml objectstore/bluestore-bitmap.yaml thrasher/default.yaml thrashosds-health.yaml workload/rgw_user_quota.yaml} | 2
Failure Reason: Command failed on mira109 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdc'
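The repeated `mkfs.xfs` failures (jobs 5112579, 5112556, 5112531, 5072536) all come from the same scratch-device format step on mira109. Note the command passes `-f` twice, which is redundant but harmless: `-f` simply forces mkfs over an existing filesystem. A sketch of that step as a subprocess call, with the `yes |` pipe reproduced (device path as in the jobs; destructive, so only run against a disposable disk):

```python
# Sketch of the scratch-device format step from the failed jobs. `yes`
# feeds confirmations to any interactive prompt (redundant alongside -f,
# mirroring the original command). DESTRUCTIVE: wipes the target device.
import subprocess

def format_scratch(dev: str = "/dev/sdc") -> None:
    yes = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
    try:
        subprocess.run(
            ["sudo", "mkfs.xfs", "-f", "-i", "size=2048", dev],
            stdin=yes.stdout,
            check=True,  # a nonzero status here is what the jobs reported
        )
    finally:
        yes.terminate()
```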