User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail
---|---|---|---|---|---|---|---|---|---|---
sage | 2018-02-12 22:34:03 | 2018-02-12 22:36:22 | 2018-02-13 01:57:45 | 3:21:23 | rados | wip-sage3-testing-2018-02-12-1424 | smithi | b4910f7 | 2 | 10
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 2184799 | 2018-02-12 22:35:46 | 2018-02-12 22:36:19 | 2018-02-13 01:55:45 | 3:19:26 | 3:08:27 | 0:10:59 | smithi | master | | | rados/standalone/crush.yaml | 1 | Command failed (workunit test crush/crush-choose-args.sh) on smithi184 with status 124: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'`
fail | 2184800 | 2018-02-12 22:35:49 | 2018-02-12 22:36:21 | 2018-02-12 23:00:24 | 0:24:03 | 0:11:20 | 0:12:43 | smithi | master | | | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} | 3 | Command failed on smithi037 with status 1: `"sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"`
fail | 2184801 | 2018-02-12 22:35:51 | 2018-02-12 22:36:22 | 2018-02-13 01:53:42 | 3:17:20 | 3:07:58 | 0:09:22 | smithi | master | | | rados/standalone/erasure-code.yaml | 1 | Command failed (workunit test erasure-code/test-erasure-code-plugins.sh) on smithi172 with status 124: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code-plugins.sh'`
fail | 2184802 | 2018-02-12 22:35:53 | 2018-02-12 22:36:21 | 2018-02-13 01:53:44 | 3:17:23 | 3:07:41 | 0:09:42 | smithi | master | | | rados/standalone/misc.yaml | 1 | Command failed (workunit test misc/rados-striper.sh) on smithi045 with status 124: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/rados-striper.sh'`
fail | 2184803 | 2018-02-12 22:35:55 | 2018-02-12 22:36:22 | 2018-02-12 22:56:17 | 0:19:55 | 0:09:47 | 0:10:08 | smithi | master | | | rados/standalone/mon.yaml | 1 | Command failed (workunit test mon/misc.sh) on smithi186 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/misc.sh'`
pass | 2184804 | 2018-02-12 22:35:57 | 2018-02-12 22:36:22 | 2018-02-12 23:16:25 | 0:40:03 | 0:28:43 | 0:11:20 | smithi | master | | | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_api_tests.yaml} | 2 |
fail | 2184805 | 2018-02-12 22:35:59 | 2018-02-12 22:36:24 | 2018-02-13 01:57:45 | 3:21:21 | 3:09:04 | 0:12:17 | smithi | master | | | rados/standalone/osd.yaml | 1 | Command failed (workunit test osd/osd-backfill-stats.sh) on smithi063 with status 124: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-backfill-stats.sh'`
fail | 2184806 | 2018-02-12 22:36:01 | 2018-02-12 22:36:17 | 2018-02-13 01:51:45 | 3:15:28 | 3:07:27 | 0:08:01 | smithi | master | | | rados/standalone/scrub.yaml | 1 | Command failed (workunit test scrub/osd-recovery-scrub.sh) on smithi193 with status 124: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-1424 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-recovery-scrub.sh'`
pass | 2184807 | 2018-02-12 22:36:02 | 2018-02-12 22:36:22 | 2018-02-12 23:00:21 | 0:23:59 | 0:16:37 | 0:07:22 | smithi | master | centos | | rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} | 1 |
fail | 2184808 | 2018-02-12 22:36:04 | 2018-02-12 22:36:24 | 2018-02-12 23:22:57 | 0:46:33 | 0:35:26 | 0:11:07 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | `"2018-02-12 23:05:37.325146 mon.a mon.0 172.21.15.74:6789/0 105 : cluster [WRN] Manager daemon x is unresponsive. No standby daemons available."` in cluster log
fail | 2184809 | 2018-02-12 22:36:06 | 2018-02-12 22:36:25 | 2018-02-12 23:22:57 | 0:46:32 | 0:35:05 | 0:11:27 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | `"2018-02-12 23:05:28.797268 mon.b mon.0 172.21.15.155:6789/0 101 : cluster [WRN] Manager daemon x is unresponsive. No standby daemons available."` in cluster log
fail | 2184810 | 2018-02-12 22:36:08 | 2018-02-12 22:36:22 | 2018-02-12 22:58:17 | 0:21:55 | 0:11:47 | 0:10:08 | smithi | master | | | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | Command failed on smithi113 with status 234: `'sudo -u ceph CEPH_ARGS=--no-mon-config ceph-monstore-tool /var/lib/ceph/mon/ceph-a rebuild -- --keyring /etc/ceph/ceph.keyring'`