| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
|---|---|---|---|---|---|---|---|---|---|---|
| kchai | 2019-07-25 03:22:38 | 2019-07-25 03:23:27 | 2019-07-25 10:11:11 | 6:47:44 | rados | wip-ceph-mutex-kefu | smithi | ca81d1f | 6 | 1 |
| Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 4146318 | 2019-07-25 03:22:49 | 2019-07-25 03:23:09 | 2019-07-25 03:49:09 | 0:26:00 | 0:13:41 | 0:12:19 | smithi | master | rhel | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | "2019-07-25T03:45:59.936084+0000 mon.a (mon.0) 182 : cluster [WRN] Health check failed: pauserd,pausewr flag(s) set (OSDMAP_FLAGS)" in cluster log |
| fail | 4146319 | 2019-07-25 03:22:50 | 2019-07-25 03:23:10 | 2019-07-25 04:37:10 | 1:14:00 | 0:19:22 | 0:54:38 | smithi | master | rhel | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | Command failed on smithi129 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=true" |
| fail | 4146320 | 2019-07-25 03:22:51 | 2019-07-25 03:23:27 | 2019-07-25 03:59:27 | 0:36:00 | 0:29:36 | 0:06:24 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} | 2 | Test failure: test_create_rbd_twice (tasks.mgr.dashboard.test_rbd.RbdTest) |
| fail | 4146321 | 2019-07-25 03:22:51 | 2019-07-25 03:25:09 | 2019-07-25 04:09:08 | 0:43:59 | 0:21:04 | 0:22:55 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | "2019-07-25T04:00:39.460174+0000 mon.a (mon.0) 2280 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
| fail | 4146322 | 2019-07-25 03:22:52 | 2019-07-25 03:25:09 | 2019-07-25 04:25:08 | 0:59:59 | 0:11:41 | 0:48:18 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} | 2 | Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
| dead | 4146323 | 2019-07-25 03:22:53 | 2019-07-25 03:27:05 | 2019-07-25 10:11:11 | 6:44:06 | | | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
| fail | 4146324 | 2019-07-25 03:22:54 | 2019-07-25 03:29:30 | 2019-07-25 05:01:31 | 1:32:01 | 1:07:32 | 0:24:29 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | Command failed on smithi059 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap' |