Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | Description | Nodes
pass 3474915 2019-01-17 16:15:37 2019-01-17 16:17:13 2019-01-17 16:39:12 0:21:59 0:09:30 0:12:29 smithi master rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
pass 3474916 2019-01-17 16:15:38 2019-01-17 16:17:13 2019-01-17 16:59:13 0:42:00 0:27:11 0:14:49 smithi master rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 2
fail 3474917 2019-01-17 16:15:38 2019-01-17 16:17:13 2019-01-17 19:51:16 3:34:03 3:07:12 0:26:51 smithi master rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/workunits.yaml} 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi018 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8dd4ca32137d2fe1cd24111a83b7d4ad52696baa TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'
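The exit status 124 comes from the `timeout 3h` wrapper visible in the command above: GNU coreutils `timeout` returns 124 when it kills a command for exceeding its limit, so the workunit ran the full three hours without completing rather than failing an assertion. The behavior is easy to confirm locally:

    $ timeout 2 sleep 5; echo $?
    124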

pass 3474918 2019-01-17 16:15:39 2019-01-17 16:17:13 2019-01-17 17:19:13 1:02:00 0:54:17 0:07:43 smithi master rados/standalone/osd.yaml 1
pass 3474919 2019-01-17 16:15:40 2019-01-17 16:17:13 2019-01-17 16:47:12 0:29:59 0:17:18 0:12:41 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs-balancer-crush-compat.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
fail 3474920 2019-01-17 16:15:41 2019-01-17 16:17:13 2019-01-17 16:43:13 0:26:00 0:15:15 0:10:45 smithi master rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

"2019-01-17 16:32:44.194301 osd.1 osd.1 172.21.15.26:6813/14859 1 : cluster [WRN] 7 slow requests, 5 included below; oldest blocked for > 30.410586 secs" in cluster log

pass 3474921 2019-01-17 16:15:41 2019-01-17 16:19:01 2019-01-17 17:15:01 0:56:00 0:44:50 0:11:10 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2