User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-09-08 13:41:17 | 2019-09-08 13:43:43 | 2019-09-08 19:47:22 | 6:03:39 | rados | wip-kefu-testing-2019-09-07-0159 | smithi | 0830cb0 | 11 | 9 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4289733 | 2019-09-08 13:41:28 | 2019-09-08 13:43:34 | 2019-09-08 14:33:34 | 0:50:00 | 0:26:22 | 0:23:38 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 |
fail | 4289734 | 2019-09-08 13:41:28 | 2019-09-08 13:43:43 | 2019-09-08 14:09:42 | 0:25:59 | 0:19:02 | 0:06:57 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/erasure-code.yaml} | 1 | |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'
fail | 4289735 | 2019-09-08 13:41:29 | 2019-09-08 13:44:55 | 2019-09-08 14:20:55 | 0:36:00 | 0:15:34 | 0:20:26 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/misc.yaml} | 1 | |
Failure Reason:
Command failed (workunit test misc/network-ping.sh) on smithi180 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/network-ping.sh'
pass | 4289736 | 2019-09-08 13:41:30 | 2019-09-08 13:45:33 | 2019-09-08 14:11:32 | 0:25:59 | 0:12:22 | 0:13:37 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4289737 | 2019-09-08 13:41:31 | 2019-09-08 13:45:51 | 2019-09-08 14:21:50 | 0:35:59 | 0:19:37 | 0:16:22 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
pass | 4289738 | 2019-09-08 13:41:32 | 2019-09-08 13:50:12 | 2019-09-08 14:12:11 | 0:21:59 | 0:14:37 | 0:07:22 | smithi | master | rhel | 7.6 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4289739 | 2019-09-08 13:41:32 | 2019-09-08 13:53:27 | 2019-09-08 14:35:27 | 0:42:00 | 0:30:18 | 0:11:42 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on smithi064 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 4289740 | 2019-09-08 13:41:33 | 2019-09-08 13:53:56 | 2019-09-08 14:45:56 | 0:52:00 | 0:40:03 | 0:11:57 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} | 1 | |
Failure Reason:
Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh'
pass | 4289741 | 2019-09-08 13:41:34 | 2019-09-08 13:55:44 | 2019-09-08 14:35:43 | 0:39:59 | 0:26:52 | 0:13:07 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4289742 | 2019-09-08 13:41:35 | 2019-09-08 13:59:45 | 2019-09-08 14:27:44 | 0:27:59 | 0:20:49 | 0:07:10 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 4289743 | 2019-09-08 13:41:36 | 2019-09-08 13:59:45 | 2019-09-08 14:45:45 | 0:46:00 | 0:29:01 | 0:16:59 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 4289744 | 2019-09-08 13:41:36 | 2019-09-08 14:01:17 | 2019-09-08 14:29:16 | 0:27:59 | 0:14:42 | 0:13:17 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Test failure: test_cephfs_get (tasks.mgr.dashboard.test_cephfs.CephfsTest)
fail | 4289745 | 2019-09-08 13:41:37 | 2019-09-08 14:01:17 | 2019-09-08 19:47:22 | 5:46:05 | 5:38:48 | 0:07:17 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
Failure Reason:
Command failed (workunit test osd/divergent-priors.sh) on smithi062 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/divergent-priors.sh'
pass | 4289746 | 2019-09-08 13:41:38 | 2019-09-08 14:01:18 | 2019-09-08 14:41:17 | 0:39:59 | 0:24:24 | 0:15:35 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_api_tests.yaml} | 2 | |
pass | 4289747 | 2019-09-08 13:41:39 | 2019-09-08 14:01:20 | 2019-09-08 14:57:19 | 0:55:59 | 0:43:34 | 0:12:25 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
fail | 4289748 | 2019-09-08 13:41:40 | 2019-09-08 14:01:42 | 2019-09-08 14:49:42 | 0:48:00 | 0:17:05 | 0:30:55 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_cls_all.yaml} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_rgw.sh) on smithi085 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
fail | 4289749 | 2019-09-08 13:41:40 | 2019-09-08 14:03:28 | 2019-09-08 14:33:27 | 0:29:59 | 0:17:49 | 0:12:10 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason:
Command failed (workunit test scrub/osd-recovery-scrub.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'
pass | 4289750 | 2019-09-08 13:41:41 | 2019-09-08 14:03:28 | 2019-09-08 14:43:27 | 0:39:59 | 0:22:47 | 0:17:12 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
pass | 4289751 | 2019-09-08 13:41:42 | 2019-09-08 14:06:36 | 2019-09-08 14:48:36 | 0:42:00 | 0:13:20 | 0:28:40 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
fail | 4289752 | 2019-09-08 13:41:43 | 2019-09-08 14:08:11 | 2019-09-08 14:36:10 | 0:27:59 | 0:11:04 | 0:16:55 | smithi | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 |
Failure Reason:
Command failed (workunit test cls/test_cls_rbd.sh) on smithi053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0830cb01ffbdabb8351dee7a0448d8234706bf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh'