User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-09-13 13:04:52 | 2019-09-13 13:05:19 | 2019-09-13 14:47:19 | 1:42:00 | rados | wip-kefu-testing-2019-09-11-2224 | mira | aeeefb5 | 2 | 7 |
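For reference, the Runtime column is Updated minus Started, which matches the summary row above. A minimal sketch (Python, not part of the original report) verifying that arithmetic:

```python
from datetime import datetime

# Summary row above: Runtime should equal Updated - Started.
fmt = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2019-09-13 13:05:19", fmt)
updated = datetime.strptime("2019-09-13 14:47:19", fmt)
runtime = updated - started
print(runtime)  # 1:42:00, matching the Runtime column
```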
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4303266 | 2019-09-13 13:05:02 | 2019-09-13 13:05:19 | 2019-09-13 13:39:18 | 0:33:59 | 0:24:48 | 0:09:11 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest) |
fail | 4303267 | 2019-09-13 13:05:03 | 2019-09-13 13:05:18 | 2019-09-13 14:47:19 | 1:42:01 | 1:33:30 | 0:08:31 | mira | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/osd.yaml} | 1 | Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on mira075 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aeeefb50c08911ac144f76a1f57e6ee511c041bb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh' |
fail | 4303268 | 2019-09-13 13:05:04 | 2019-09-13 13:05:19 | 2019-09-13 13:49:18 | 0:43:59 | 0:20:34 | 0:23:25 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | "2019-09-13T13:46:02.896025+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client mira065:z (4507), after 300.177 seconds" in cluster log |
fail | 4303269 | 2019-09-13 13:05:05 | 2019-09-13 13:05:19 | 2019-09-13 13:29:18 | 0:23:59 | 0:14:59 | 0:09:00 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | Command failed on mira063 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest' |
fail | 4303270 | 2019-09-13 13:05:06 | 2019-09-13 13:05:20 | 2019-09-13 14:15:20 | 1:10:00 | 0:56:33 | 0:13:27 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | "2019-09-13T13:55:38.457383+0000 mon.b (mon.0) 1473 : cluster [WRN] Health check failed: Long heartbeat ping times on back interface seen, longest is 2058.582 msec (OSD_SLOW_PING_TIME_BACK)" in cluster log |
fail | 4303271 | 2019-09-13 13:05:06 | 2019-09-13 13:09:24 | 2019-09-13 13:45:23 | 0:35:59 | 0:23:42 | 0:12:17 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | Command failed on mira064 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest' |
pass | 4303272 | 2019-09-13 13:05:07 | 2019-09-13 13:13:23 | 2019-09-13 14:01:23 | 0:48:00 | 0:39:27 | 0:08:33 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
pass | 4303273 | 2019-09-13 13:05:08 | 2019-09-13 13:15:22 | 2019-09-13 14:11:22 | 0:56:00 | 0:36:06 | 0:19:54 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 4303274 | 2019-09-13 13:05:09 | 2019-09-13 13:29:20 | 2019-09-13 13:51:19 | 0:21:59 | 0:12:11 | 0:09:48 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest) |
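The summary's Pass/Fail counts can be cross-checked against the job rows, as can the per-job time columns. A small sketch (not part of the original report), assuming Runtime = Duration + In Waiting, which holds for every job listed:

```python
from datetime import timedelta

# Job statuses transcribed from the rows above.
statuses = {
    4303266: "fail", 4303267: "fail", 4303268: "fail", 4303269: "fail",
    4303270: "fail", 4303271: "fail", 4303272: "pass", 4303273: "pass",
    4303274: "fail",
}
passed = sum(1 for s in statuses.values() if s == "pass")
failed = sum(1 for s in statuses.values() if s == "fail")
print(passed, failed)  # 2 7 -- matches the Pass/Fail columns in the summary

def hms(s: str) -> timedelta:
    """Parse an H:MM:SS column value into a timedelta."""
    h, m, sec = map(int, s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# e.g. job 4303266: Duration + In Waiting = Runtime
assert hms("0:24:48") + hms("0:09:11") == hms("0:33:59")
```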