User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-09-15 15:37:26 | 2019-09-15 15:37:58 | 2019-09-15 18:32:00 | 2:54:02 | rados | wip-kefu-testing-2019-09-15-1533 | mira | 89d6310 | 2 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4311106 | 2019-09-15 15:37:39 | 2019-09-15 15:37:59 | 2019-09-15 16:09:58 | 0:31:59 | 0:19:40 | 0:12:19 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason: "2019-09-15T16:06:46.889283+0000 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client mira101:x (4662), after 302.397 seconds" in cluster log
fail | 4311107 | 2019-09-15 15:37:40 | 2019-09-15 15:37:58 | 2019-09-15 16:07:58 | 0:30:00 | 0:19:32 | 0:10:28 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} | 1 | |
Failure Reason: Command failed (workunit test erasure-code/test-erasure-code.sh) on mira110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89d631060fe9116c630d52b252ef94de20b166d0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'
pass | 4311108 | 2019-09-15 15:37:41 | 2019-09-15 15:37:58 | 2019-09-15 18:32:00 | 2:54:02 | 2:36:30 | 0:17:32 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
fail | 4311109 | 2019-09-15 15:37:42 | 2019-09-15 15:37:59 | 2019-09-15 16:01:58 | 0:23:59 | 0:14:52 | 0:09:07 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on mira027 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4311110 | 2019-09-15 15:37:43 | 2019-09-15 15:37:59 | 2019-09-15 16:01:58 | 0:23:59 | 0:15:17 | 0:08:42 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on mira088 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4311111 | 2019-09-15 15:37:44 | 2019-09-15 15:37:59 | 2019-09-15 16:11:59 | 0:34:00 | 0:24:42 | 0:09:18 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)
fail | 4311112 | 2019-09-15 15:37:45 | 2019-09-15 16:02:11 | 2019-09-15 17:10:11 | 1:08:00 | 0:55:55 | 0:12:05 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} | 1 | |
Failure Reason: Command failed (workunit test mon/mon-osdmap-prune.sh) on mira027 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89d631060fe9116c630d52b252ef94de20b166d0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'
fail | 4311113 | 2019-09-15 15:37:45 | 2019-09-15 16:02:11 | 2019-09-15 16:40:10 | 0:37:59 | 0:15:10 | 0:22:49 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
Failure Reason: Command failed (workunit test osd/divergent-priors.sh) on mira088 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89d631060fe9116c630d52b252ef94de20b166d0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'
pass | 4311114 | 2019-09-15 15:37:46 | 2019-09-15 16:08:16 | 2019-09-15 17:30:15 | 1:21:59 | 1:07:51 | 0:14:08 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
fail | 4311115 | 2019-09-15 15:37:47 | 2019-09-15 16:10:00 | 2019-09-15 16:44:00 | 0:34:00 | 0:22:07 | 0:11:53 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-recovery-scrub.sh) on mira101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89d631060fe9116c630d52b252ef94de20b166d0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'
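In the job rows above, the three time columns are related: Runtime is wall-clock time from Started to Updated, and it equals Duration (time the job actually ran) plus In Waiting (time spent waiting, e.g. for machines). A quick sanity check in Python, using the values from job 4311106 (the column names are from this table; `parse_hms` is just a helper for illustration):

```python
from datetime import timedelta

def parse_hms(s: str) -> timedelta:
    # Parse an "H:MM:SS" string like "0:31:59" into a timedelta.
    h, m, sec = (int(x) for x in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# Values from job 4311106: Runtime, Duration, In Waiting.
runtime = parse_hms("0:31:59")
duration = parse_hms("0:19:40")
in_waiting = parse_hms("0:12:19")

assert runtime == duration + in_waiting  # 0:19:40 + 0:12:19 == 0:31:59
```

The same identity holds for every row, e.g. job 4311107 (0:19:32 + 0:10:28 == 0:30:00).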