User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
kchai | 2018-09-21 04:27:26 | 2018-09-21 04:29:13 | 2018-09-21 06:39:31 | 2:10:18 | rados | master | smithi | 6be4215 | 4 | 3 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3049037 | | 2018-09-21 04:27:30 | 2018-09-21 04:29:13 | 2018-09-21 04:59:12 | 0:29:59 | 0:18:05 | 0:11:54 | smithi | master | centos | 7.4 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/scrub.yaml} | 1 |
fail | 3049038 | | 2018-09-21 04:27:31 | 2018-09-21 04:29:28 | 2018-09-21 05:23:29 | 0:54:01 | 0:33:47 | 0:20:14 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 |
pass | 3049039 | | 2018-09-21 04:27:31 | 2018-09-21 04:29:29 | 2018-09-21 04:53:29 | 0:24:00 | 0:13:33 | 0:10:27 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/scrub_test.yaml} | 2 |
pass | 3049040 | | 2018-09-21 04:27:32 | 2018-09-21 04:29:29 | 2018-09-21 06:39:31 | 2:10:02 | 1:47:05 | 0:22:57 | smithi | master | centos | 7.4 | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{centos_latest.yaml} thrashosds-health.yaml} | 3 |
pass | 3049042 | | 2018-09-21 04:27:33 | 2018-09-21 04:29:32 | 2018-09-21 04:47:31 | 0:17:59 | 0:11:44 | 0:06:15 | smithi | master | rhel | 7.5 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
pass | 3049044 | | 2018-09-21 04:27:33 | 2018-09-21 04:29:33 | 2018-09-21 05:17:33 | 0:48:00 | 0:23:19 | 0:24:41 | smithi | master | centos | 7.4 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 |
fail | 3049046 | | 2018-09-21 04:27:34 | 2018-09-21 04:29:37 | 2018-09-21 05:01:37 | 0:32:00 | 0:16:55 | 0:15:05 | smithi | master | rhel | 7.5 | rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/osd.yaml} | 1 |

Failure Reason (3049037):
Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

Failure Reason (3049038):
Command failed on smithi010 with status 134: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''

Failure Reason (3049046):
Command failed (workunit test osd/osd-backfill-stats.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-backfill-stats.sh'