User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2019-02-27 17:20:44 | 2019-02-27 17:43:48 | 2019-02-27 21:08:37 | 3:24:49 | rados | wip-yuri3-testing-2019-02-25-2101-luminous | smithi | 34a20fc | 1 | 13 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3645493 | | 2019-02-27 17:20:50 | 2019-02-27 17:43:39 | 2019-02-27 17:51:38 | 0:07:59 | | | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml recovery-overrides/{default.yaml} supported/ubuntu_14.04.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason: Command failed on smithi191 with status 100: 'sudo apt-get update'
fail | 3645494 | | 2019-02-27 17:20:51 | 2019-02-27 17:43:48 | 2019-02-27 18:21:48 | 0:38:00 | 0:28:17 | 0:09:43 | smithi | master | | | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 2 |
Failure Reason: "2019-02-27 18:02:07.269657 osd.1 osd.1 172.21.15.192:6801/13027 1610 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 32.087976 secs" in cluster log
fail | 3645495 | | 2019-02-27 17:20:51 | 2019-02-27 17:44:34 | 2019-02-27 18:00:34 | 0:16:00 | 0:06:41 | 0:09:19 | smithi | master | | | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3645496 | | 2019-02-27 17:20:52 | 2019-02-27 17:44:56 | 2019-02-27 18:02:55 | 0:17:59 | 0:06:54 | 0:11:05 | smithi | master | | | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3645497 | | 2019-02-27 17:20:53 | 2019-02-27 17:45:08 | 2019-02-27 17:53:07 | 0:07:59 | | | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported/ubuntu_14.04.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason: Command failed on smithi165 with status 100: 'sudo apt-get update'
fail | 3645498 | | 2019-02-27 17:20:54 | 2019-02-27 17:47:17 | 2019-02-27 18:05:16 | 0:17:59 | 0:07:37 | 0:10:22 | smithi | master | | | rados/rest/rest_test.yaml | 2 |
Failure Reason: Command failed (workunit test rest/test.py) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'
fail | 3645499 | | 2019-02-27 17:20:55 | 2019-02-27 17:49:36 | 2019-02-27 18:07:36 | 0:18:00 | 0:06:47 | 0:11:13 | smithi | master | | | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 3645500 | | 2019-02-27 17:20:55 | 2019-02-27 17:50:34 | 2019-02-27 21:08:37 | 3:18:03 | 3:07:40 | 0:10:23 | smithi | master | | | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/workunits.yaml} | 2 |
Failure Reason: Command failed (workunit test mgr/test_localpool.sh) on smithi137 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'
fail | 3645501 | | 2019-02-27 17:20:56 | 2019-02-27 17:50:54 | 2019-02-27 18:10:54 | 0:20:00 | 0:10:43 | 0:09:17 | smithi | master | | | rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 |
Failure Reason: "2019-02-27 18:04:48.284060 osd.1 osd.1 172.21.15.174:6805/14363 1 : cluster [WRN] 5 slow requests, 5 included below; oldest blocked for > 30.397810 secs" in cluster log
fail | 3645502 | | 2019-02-27 17:20:57 | 2019-02-27 17:50:54 | 2019-02-27 18:04:54 | 0:14:00 | 0:06:23 | 0:07:37 | smithi | master | | | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 3645503 | | 2019-02-27 17:20:58 | 2019-02-27 17:51:39 | 2019-02-27 18:21:39 | 0:30:00 | 0:20:05 | 0:09:55 | smithi | master | | | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml} | 2 |
fail | 3645504 | | 2019-02-27 17:20:59 | 2019-02-27 17:53:08 | 2019-02-27 18:09:07 | 0:15:59 | 0:06:36 | 0:09:23 | smithi | master | | | rados/singleton/{all/rest-api.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test rest/test.py) on smithi101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a20fc0d402d777e4edc4b483a93f4d7a97d0d4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'
fail | 3645505 | | 2019-02-27 17:21:00 | 2019-02-27 17:53:09 | 2019-02-27 18:01:07 | 0:07:58 | | | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs-balancer-crush-compat.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported/ubuntu_14.04.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason: Command failed on smithi095 with status 100: 'sudo apt-get update'
fail | 3645506 | | 2019-02-27 17:21:01 | 2019-02-27 17:53:08 | 2019-02-27 18:47:08 | 0:54:00 | 0:42:34 | 0:11:26 | smithi | master | | | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 2 |
Failure Reason: Command failed on smithi166 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 injectargs -- --filestore_debug_random_read_err=0.0'