Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
(Runtime is the Started-to-Updated wall-clock time; Duration is the time the job actually ran; In Waiting is Runtime minus Duration.)
pass 3609085 2019-02-18 23:59:38 2019-02-19 00:01:23 2019-02-19 00:59:22 0:57:59 0:46:44 0:11:15 smithi master ubuntu 18.04 rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
pass 3609087 2019-02-18 23:59:39 2019-02-19 00:01:23 2019-02-19 00:37:23 0:36:00 0:24:54 0:11:06 smithi master ubuntu 18.04 rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
fail 3609089 2019-02-18 23:59:40 2019-02-19 00:01:23 2019-02-19 00:29:22 0:27:59 0:16:09 0:11:50 smithi master centos 7.5 rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

"2019-02-19 00:20:39.363997 mon.a (mon.0) 92 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log

fail 3609091 2019-02-18 23:59:40 2019-02-19 00:01:24 2019-02-19 00:17:22 0:15:58 0:06:05 0:09:53 smithi master ubuntu 16.04 rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/one.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command crashed: 'sudo ceph --cluster ceph osd crush tunables default'
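
A minimal sketch, assuming one wanted to re-run the crashed command in isolation via teuthology's exec task (the mon.a role is an assumption; in the actual job the command was issued from within the workload):

    tasks:
    - exec:
        mon.a:
          # re-run the crashed CRUSH tunables command on its own
          - sudo ceph --cluster ceph osd crush tunables default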

pass 3609093 2019-02-18 23:59:41 2019-02-19 00:01:23 2019-02-19 00:25:22 0:23:59 0:16:26 0:07:33 smithi master rhel 7.5 rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
pass 3609095 2019-02-18 23:59:42 2019-02-19 00:03:13 2019-02-19 00:35:13 0:32:00 0:21:39 0:10:21 smithi master ubuntu 16.04 rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync.yaml workloads/rados_api_tests.yaml} 2
pass 3609097 2019-02-18 23:59:43 2019-02-19 00:03:13 2019-02-19 01:03:14 1:00:01 0:53:16 0:06:45 smithi master rhel 7.5 rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
fail 3609099 2019-02-18 23:59:43 2019-02-19 00:03:15 2019-02-19 00:27:14 0:23:59 0:11:44 0:12:15 smithi master centos 7.5 rados:monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

"2019-02-19 00:21:30.433246 mon.b (mon.0) 163 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log

fail 3609101 2019-02-18 23:59:44 2019-02-19 00:03:15 2019-02-19 00:29:15 0:26:00 0:18:52 0:07:08 smithi master rhel 7.5 rados:monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

"2019-02-19 00:20:36.461851 mon.f (mon.0) 93 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log