User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
smithfarm | 2018-03-28 20:31:45 | 2018-03-28 20:43:39 | 2018-03-28 22:09:40 | 1:26:01 | rados | wip-jewel-backports | smithi | ff281b3 | 4 | 7 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 2331780 | | 2018-03-28 20:31:48 | 2018-03-28 20:43:39 | 2018-03-28 22:09:40 | 1:26:01 | 1:12:41 | 0:13:20 | smithi | master | centos | | rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml} | 3 |
pass | 2331781 | | 2018-03-28 20:31:49 | 2018-03-28 20:43:39 | 2018-03-28 21:15:38 | 0:31:59 | 0:25:05 | 0:06:54 | smithi | master | | | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 |
pass | 2331782 | | 2018-03-28 20:31:50 | 2018-03-28 20:43:45 | 2018-03-28 21:09:44 | 0:25:59 | 0:18:28 | 0:07:31 | smithi | master | | | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/mapgap.yaml workloads/ec-small-objects-fast-read.yaml} | 2 |
fail | 2331783 | | 2018-03-28 20:31:51 | 2018-03-28 20:43:46 | 2018-03-28 21:11:46 | 0:28:00 | 0:17:22 | 0:10:38 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml} | 2 |
fail | 2331784 | | 2018-03-28 20:31:51 | 2018-03-28 20:43:46 | 2018-03-28 21:15:46 | 0:32:00 | 0:21:09 | 0:10:51 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml workloads/snaps-few-objects.yaml} | 2 |
fail | 2331785 | | 2018-03-28 20:31:52 | 2018-03-28 20:43:47 | 2018-03-28 21:07:47 | 0:24:00 | 0:14:51 | 0:09:09 | smithi | master | | | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml} | 2 |
fail | 2331786 | | 2018-03-28 20:31:53 | 2018-03-28 20:43:49 | 2018-03-28 21:13:48 | 0:29:59 | 0:18:42 | 0:11:17 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/rados_api_tests.yaml} | 2 |
fail | 2331787 | | 2018-03-28 20:31:54 | 2018-03-28 20:45:38 | 2018-03-28 21:15:38 | 0:30:00 | 0:21:14 | 0:08:46 | smithi | master | | | rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/rados_api_tests.yaml} | 2 |
pass | 2331788 | | 2018-03-28 20:31:54 | 2018-03-28 20:45:38 | 2018-03-28 21:21:38 | 0:36:00 | 0:25:15 | 0:10:45 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
pass | 2331789 | | 2018-03-28 20:31:55 | 2018-03-28 20:45:41 | 2018-03-28 21:21:41 | 0:36:00 | 0:29:17 | 0:06:43 | smithi | master | | | rados/objectstore/objectstore.yaml | 1 |
fail | 2331790 | | 2018-03-28 20:31:56 | 2018-03-28 20:46:06 | 2018-03-28 21:10:06 | 0:24:00 | 0:14:46 | 0:09:14 | smithi | master | | | rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 |

Failure reasons:

- 2331780: `Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 100 --op delete 50 --pool unique_pool_8'`
- 2331783: `Command failed (workunit test rados/test.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'`
- 2331784: `Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'`
- 2331785: `Command failed (workunit test rados/test.sh) on smithi199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'`
- 2331786: `Command failed (workunit test rados/test.sh) on smithi041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'`
- 2331787: `Command failed (workunit test rados/test.sh) on smithi204 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'`
- 2331790: `Command failed (workunit test rados/test.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'`