User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2018-10-22 01:58:26 | 2018-10-22 02:03:42 | 2018-10-22 03:57:40 | 1:53:58 | rados | wip-kefu-testing-2018-10-20-1204 | smithi | a057a63 | 6 | 8 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3170184 | 2018-10-22 01:58:31 | 2018-10-22 02:03:42 | 2018-10-22 02:31:41 | 0:27:59 | 0:08:28 | 0:19:31 | smithi | master | centos | 7.4 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
fail | 3170185 | 2018-10-22 01:58:32 | 2018-10-22 02:09:50 | 2018-10-22 02:41:50 | 0:32:00 | 0:18:18 | 0:13:42 | smithi | master | centos | 7.4 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/erasure-code.yaml} | 1 |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a057a63e97ca1879d18b3d5d5c0fcce12b82ed86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'
pass | 3170186 | 2018-10-22 01:58:33 | 2018-10-22 02:15:49 | 2018-10-22 02:41:49 | 0:26:00 | 0:19:27 | 0:06:33 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 |
pass | 3170187 | 2018-10-22 01:58:34 | 2018-10-22 02:17:38 | 2018-10-22 02:35:38 | 0:18:00 | 0:11:38 | 0:06:22 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_latest.yaml} tasks/crash.yaml} | 2 |
pass | 3170188 | 2018-10-22 01:58:34 | 2018-10-22 02:19:45 | 2018-10-22 03:21:46 | 1:02:01 | 0:15:22 | 0:46:39 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 |
pass | 3170189 | 2018-10-22 01:58:35 | 2018-10-22 02:29:52 | 2018-10-22 03:11:52 | 0:42:00 | 0:31:17 | 0:10:43 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 |
pass | 3170190 | 2018-10-22 01:58:36 | 2018-10-22 02:29:52 | 2018-10-22 03:15:52 | 0:46:00 | 0:15:15 | 0:30:45 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
fail | 3170191 | 2018-10-22 01:58:37 | 2018-10-22 02:31:38 | 2018-10-22 03:03:38 | 0:32:00 | 0:23:36 | 0:08:24 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/mon.yaml} | 1 |
Failure Reason:
Command failed (workunit test mon/osd-pool-create.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a057a63e97ca1879d18b3d5d5c0fcce12b82ed86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-pool-create.sh'
pass | 3170192 | 2018-10-22 01:58:38 | 2018-10-22 02:31:42 | 2018-10-22 03:33:43 | 1:02:01 | 0:14:25 | 0:47:36 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
fail | 3170193 | 2018-10-22 01:58:39 | 2018-10-22 02:35:39 | 2018-10-22 03:21:39 | 0:46:00 | 0:18:38 | 0:27:22 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason:
"2018-10-22 03:12:35.865433 mgr.y (mgr.5655) 6 : cluster [SEC] foo bar security" in cluster log
fail | 3170194 | 2018-10-22 01:58:39 | 2018-10-22 02:35:39 | 2018-10-22 03:57:40 | 1:22:01 | 1:06:44 | 0:15:17 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/osd.yaml} | 1 |
Failure Reason:
Command failed (workunit test osd/osd-rep-recov-eio.sh) on smithi091 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a057a63e97ca1879d18b3d5d5c0fcce12b82ed86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-rep-recov-eio.sh'
fail | 3170195 | 2018-10-22 01:58:40 | 2018-10-22 02:35:39 | 2018-10-22 03:19:39 | 0:44:00 | 0:29:31 | 0:14:29 | smithi | master | centos | 7.4 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/scrub.yaml} | 1 |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a057a63e97ca1879d18b3d5d5c0fcce12b82ed86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
fail | 3170196 | 2018-10-22 01:58:41 | 2018-10-22 02:37:52 | 2018-10-22 03:35:52 | 0:58:00 | 0:36:06 | 0:21:54 | smithi | master | centos | 7.4 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
Failure Reason:
Command failed on smithi097 with status 134: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''
fail | 3170197 | 2018-10-22 01:58:42 | 2018-10-22 02:41:38 | 2018-10-22 03:09:37 | 0:27:59 | 0:07:37 | 0:20:22 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 |
Failure Reason:
Command failed (workunit test rados/test_alloc_hint.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a057a63e97ca1879d18b3d5d5c0fcce12b82ed86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'