Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 3605215 2019-02-18 03:09:28 2019-02-18 03:25:00 2019-02-18 03:45:00 0:20:00 0:10:55 0:09:05 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4K_rand_read.yaml} 1
fail 3605217 2019-02-18 03:09:29 2019-02-18 03:26:23 2019-02-18 05:18:24 1:52:01 1:40:08 0:11:53 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_python.yaml} 2
Failure Reason: Command crashed: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3605219 2019-02-18 03:09:29 2019-02-18 03:26:22 2019-02-18 07:50:26 4:24:04 4:12:06 0:11:58 smithi master rhel 7.5 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_latest.yaml}} 1
dead 3605221 2019-02-18 03:09:30 2019-02-18 03:28:28 2019-02-18 15:30:57 12:02:29 smithi master ubuntu 16.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
fail 3605223 2019-02-18 03:09:31 2019-02-18 03:28:31 2019-02-18 04:08:31 0:40:00 0:07:41 0:32:19 smithi master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason: Command crashed: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3605225 2019-02-18 03:09:32 2019-02-18 03:30:29 2019-02-18 04:38:29 1:08:00 0:56:28 0:11:32 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
Failure Reason: Command failed on smithi075 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 3605227 2019-02-18 03:09:33 2019-02-18 03:30:29 2019-02-18 04:08:29 0:38:00 0:24:56 0:13:04 smithi master centos 7.5 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 2
fail 3605230 2019-02-18 03:09:33 2019-02-18 03:30:29 2019-02-18 04:40:29 1:10:00 0:56:26 0:13:34 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
Failure Reason: Command failed on smithi194 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3605232 2019-02-18 03:09:34 2019-02-18 03:30:29 2019-02-18 04:40:29 1:10:00 0:56:04 0:13:56 smithi master ubuntu 16.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason: Command failed on smithi046 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 3605234 2019-02-18 03:09:35 2019-02-18 03:30:31 2019-02-18 04:10:31 0:40:00 0:24:16 0:15:44 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
fail 3605236 2019-02-18 03:09:36 2019-02-18 03:32:20 2019-02-18 03:56:19 0:23:59 0:14:12 0:09:47 smithi master centos 7.5 rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason: "2019-02-18 03:49:05.167178 mon.a (mon.0) 19 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 3605238 2019-02-18 03:09:37 2019-02-18 03:32:36 2019-02-18 04:02:36 0:30:00 0:22:19 0:07:41 smithi master rhel 7.5 rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason: Command failed on smithi025 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

pass 3605240 2019-02-18 03:09:37 2019-02-18 03:34:17 2019-02-18 03:52:16 0:17:59 0:08:47 0:09:12 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4K_seq_read.yaml} 1
fail 3605242 2019-02-18 03:09:38 2019-02-18 03:34:17 2019-02-18 04:44:17 1:10:00 0:58:43 0:11:17 smithi master centos 7.5 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason: Command failed on smithi150 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 3605244 2019-02-18 03:09:39 2019-02-18 03:34:17 2019-02-18 04:12:17 0:38:00 0:25:57 0:12:03 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 3605246 2019-02-18 03:09:40 2019-02-18 03:34:29 2019-02-18 04:42:29 1:08:00 0:56:11 0:11:49 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_stress_watch.yaml} 2
Failure Reason: Command failed on smithi199 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 3605249 2019-02-18 03:09:41 2019-02-18 03:35:44 2019-02-18 04:11:43 0:35:59 0:22:14 0:13:45 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 3605251 2019-02-18 03:09:42 2019-02-18 03:36:19 2019-02-18 03:54:18 0:17:59 0:08:02 0:09:57 smithi master ubuntu 18.04 rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 3605253 2019-02-18 03:09:42 2019-02-18 03:36:20 2019-02-18 03:58:20 0:22:00 0:10:39 0:11:21 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_rand_read.yaml} 1
pass 3605255 2019-02-18 03:09:43 2019-02-18 03:36:24 2019-02-18 05:48:25 2:12:01 1:14:08 0:57:53 smithi master centos 7.5 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
fail 3605257 2019-02-18 03:09:44 2019-02-18 03:38:56 2019-02-18 04:10:56 0:32:00 0:14:56 0:17:04 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/progress.yaml} 2
Failure Reason: "2019-02-18 04:00:15.099424 mon.a (mon.1) 29 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

dead 3605259 2019-02-18 03:09:45 2019-02-18 03:40:59 2019-02-18 15:43:25 12:02:26 smithi master ubuntu 16.04 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
dead 3605262 2019-02-18 03:09:45 2019-02-18 03:45:12 2019-02-18 15:47:38 12:02:26 smithi master rhel 7.5 rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_latest.yaml}} 1
fail 3605264 2019-02-18 03:09:46 2019-02-18 03:46:28 2019-02-18 05:00:28 1:14:00 1:03:57 0:10:03 smithi master centos rados/singleton-flat/valgrind-leaks.yaml 1
Failure Reason: Command failed on smithi202 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'