User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
smithfarm | 2017-09-01 16:07:40 | 2017-09-01 17:42:17 | 2017-09-02 00:26:55 | 6:44:38 | rados | wip-20460-jewel | smithi | d2eea3f | 10 |
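The Runtime column above is the wall-clock span between Started and Updated. For the run as a whole it can be checked directly; a minimal sketch, assuming GNU coreutils `date` (the `-d` flag is a GNU extension):

```shell
# Wall-clock runtime = Updated minus Started, formatted as H:MM:SS
started=$(date -u -d "2017-09-01 17:42:17" +%s)
updated=$(date -u -d "2017-09-02 00:26:55" +%s)
secs=$(( updated - started ))
printf '%d:%02d:%02d\n' $(( secs / 3600 )) $(( secs % 3600 / 60 )) $(( secs % 60 ))
# prints 6:44:38
```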
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1586896 | | 2017-09-01 16:07:47 | 2017-09-01 17:42:17 | 2017-09-01 20:52:20 | 3:10:03 | 3:05:50 | 0:04:13 | smithi | master | | | rados/objectstore/alloc-hint.yaml | 1 |
Failure Reason:
Command failed (workunit test rados/test_alloc_hint.sh) on smithi183 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-20460-jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'
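Status 124 in the failure above is not an arbitrary error code: it is the exit status GNU coreutils `timeout` returns when it kills a command that exceeded its limit, and the command line shows the workunit wrapped in `timeout 3h`. In other words, test_alloc_hint.sh ran for the full three hours and was killed, rather than crashing. A quick demonstration of the convention:

```shell
# GNU timeout exits with status 124 when the wrapped command
# overruns its time limit and is killed
timeout 1 sleep 5
echo $?   # prints 124
```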
fail | 1586897 | | 2017-09-01 16:07:48 | 2017-09-01 17:43:49 | 2017-09-01 18:27:48 | 0:43:59 | 0:37:27 | 0:06:32 | smithi | master | ubuntu | 14.04 | rados/singleton-nomsgr/{all/11429.yaml rados.yaml} | 1 |
Failure Reason:
Found coredumps on ubuntu@smithi060.front.sepia.ceph.com
fail | 1586898 | | 2017-09-01 16:07:48 | 2017-09-01 17:43:48 | 2017-09-01 17:55:48 | 0:12:00 | 0:07:06 | 0:04:54 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml rados.yaml supported/centos_7.3.yaml thrashers/default.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
fail | 1586899 | | 2017-09-01 16:07:49 | 2017-09-01 17:44:03 | 2017-09-01 18:02:02 | 0:17:59 | 0:09:06 | 0:08:53 | smithi | master | | | rados/thrash-erasure-code-shec/{clusters/{fixed-4.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 400 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
fail | 1586900 | | 2017-09-01 16:07:50 | 2017-09-01 17:44:32 | 2017-09-01 18:12:31 | 0:27:59 | 0:10:17 | 0:17:42 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
fail | 1586901 | | 2017-09-01 16:07:50 | 2017-09-01 17:44:47 | 2017-09-02 00:26:55 | 6:42:08 | 6:19:40 | 0:22:28 | smithi | master | | | rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi147 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-20460-jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 1586902 | | 2017-09-01 16:07:51 | 2017-09-01 17:45:48 | 2017-09-01 18:01:47 | 0:15:59 | 0:07:26 | 0:08:33 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml rados.yaml supported/centos_7.3.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
fail | 1586903 | | 2017-09-01 16:07:51 | 2017-09-01 17:45:48 | 2017-09-01 17:57:47 | 0:11:59 | 0:07:22 | 0:04:37 | smithi | master | | | rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/simple.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on smithi166 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-20460-jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
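Unlike the status-124 timeouts elsewhere in this run, status 139 in job 1586903 is the shell's encoding of death by signal: 128 + 11, where 11 is SIGSEGV. test_cls_hello.sh segfaulted rather than timing out, which is consistent with this being a class-method (cls) test exercising OSD plugin code. The encoding can be reproduced directly:

```shell
# A process terminated by SIGSEGV (signal 11) is reported by the
# parent shell as exit status 128 + 11 = 139
bash -c 'kill -SEGV $$'
echo $?   # prints 139
```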
fail | 1586904 | | 2017-09-01 16:07:52 | 2017-09-01 17:45:48 | 2017-09-01 18:05:47 | 0:19:59 | 0:07:10 | 0:12:49 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
fail | 1586905 | | 2017-09-01 16:07:53 | 2017-09-01 17:46:03 | 2017-09-01 18:44:03 | 0:58:00 | 0:49:22 | 0:08:38 | smithi | master | ubuntu | 14.04 | rados/singleton-nomsgr/{all/16113.yaml rados.yaml} | 1 |
Failure Reason:
Found coredumps on ubuntu@smithi046.front.sepia.ceph.com