User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2020-05-04 23:20:29 | 2020-05-04 23:28:27 | 2020-05-05 02:42:53 | 3:14:26 | rados | wip-yuri5-testing-2020-05-04-1554-nautilus | smithi | 785d4d2 | 3 | 5 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 5023266 | 2020-05-04 23:20:37 | 2020-05-04 23:28:17 | 2020-05-05 00:24:17 | 0:56:00 | 0:32:14 | 0:23:46 | smithi | py2 | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_latest.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason: "2020-05-05 00:08:58.074887 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi075:y (4660), after 302.719 seconds" in cluster log
fail | 5023267 | 2020-05-04 23:20:38 | 2020-05-04 23:28:17 | 2020-05-05 00:44:18 | 1:16:01 | 0:33:07 | 0:42:54 | smithi | py2 | ubuntu | 16.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashosds-health.yaml} | 4 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 5023268 | 2020-05-04 23:20:39 | 2020-05-04 23:28:27 | 2020-05-04 23:52:26 | 0:23:59 | 0:13:32 | 0:10:27 | smithi | py2 | centos | 7.5 | rados/singleton-nomsgr/{all/balancer.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
fail | 5023269 | 2020-05-04 23:20:40 | 2020-05-04 23:28:39 | 2020-05-04 23:56:39 | 0:28:00 | 0:19:23 | 0:08:37 | smithi | py2 | rhel | 7.5 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi089 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=785d4d2d77bf8765f0bcd5e2b7bed4f857d0eef9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 5023270 | 2020-05-04 23:20:41 | 2020-05-04 23:30:49 | 2020-05-05 02:42:53 | 3:12:04 | 3:04:15 | 0:07:49 | smithi | py2 | rhel | 7.5 | rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/osd.yaml} | 1 |
pass | 5023271 | 2020-05-04 23:20:42 | 2020-05-04 23:30:49 | 2020-05-05 00:58:50 | 1:28:01 | 0:46:13 | 0:41:48 | smithi | py2 | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml msgr/simple.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 |
fail | 5023272 | 2020-05-04 23:20:43 | 2020-05-04 23:32:15 | 2020-05-05 00:12:15 | 0:40:00 | 0:29:02 | 0:10:58 | smithi | py2 | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 |
Failure Reason: "2020-05-04 23:58:21.716448 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi203:y (14644), after 301.491 seconds" in cluster log
fail | 5023273 | 2020-05-04 23:20:44 | 2020-05-04 23:32:30 | 2020-05-05 00:28:31 | 0:56:01 | 0:43:01 | 0:13:00 | smithi | py2 | centos | 7.5 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |
Failure Reason: Command failed on smithi163 with status 134: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3 --gtest_catch_exceptions=0\''
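The run summary above reports 3 passes and 5 failures, which can be cross-checked against the per-job rows. A minimal sketch of that tally, assuming the jobs table is available as plain pipe-delimited text lines like those above (the `tally` helper is hypothetical, not part of teuthology or pulpito):

```python
# Minimal sketch: tally pass/fail counts from pipe-delimited job rows.
# Assumes the status is the first cell of each row; "Failure Reason" lines
# and any other non-row text are skipped automatically because their first
# cell is not "pass" or "fail".

def tally(lines):
    counts = {"pass": 0, "fail": 0}
    for line in lines:
        status = line.split("|", 1)[0].strip().lower()
        if status in counts:
            counts[status] += 1
    return counts

# Example with a few abbreviated rows in the same shape as the table above:
rows = """\
fail | 5023266 | 2020-05-04 23:20:37
pass | 5023268 | 2020-05-04 23:20:39
Failure Reason: example text, not a job row
fail | 5023269 | 2020-05-04 23:20:40
""".splitlines()

print(tally(rows))  # {'pass': 1, 'fail': 2}
```

Running it over all eight job rows in this run would give `{'pass': 3, 'fail': 5}`, matching the Pass/Fail columns in the summary.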