Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 3604418 2019-02-17 17:29:37 2019-02-17 17:31:14 2019-02-17 17:55:14 0:24:00 0:16:48 0:07:12 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/radosbench_4K_rand_read.yaml} 1
fail 3604419 2019-02-17 17:29:38 2019-02-17 17:31:15 2019-02-17 18:45:15 1:14:00 1:00:46 0:13:14 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_python.yaml} 2
Failure Reason:

"2019-02-17 17:52:01.028094 mon.b (mon.0) 37 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log

pass 3604420 2019-02-17 17:29:39 2019-02-17 17:32:45 2019-02-17 22:24:49 4:52:04 4:38:55 0:13:09 smithi master ubuntu 16.04 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
fail 3604421 2019-02-17 17:29:40 2019-02-17 17:32:45 2019-02-17 18:30:45 0:58:00 0:46:20 0:11:40 smithi master ubuntu 16.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command crashed: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3604422 2019-02-17 17:29:41 2019-02-17 17:32:54 2019-02-17 18:02:53 0:29:59 0:13:51 0:16:08 smithi master centos 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

"2019-02-17 17:57:02.199578 mon.c (mon.0) 19 : cluster [WRN] Health check failed: 1/3 mons down, quorum c,a (MON_DOWN)" in cluster log

fail 3604423 2019-02-17 17:29:41 2019-02-17 17:32:57 2019-02-17 17:56:57 0:24:00 0:12:44 0:11:16 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
Failure Reason:

"2019-02-17 17:47:12.804473 mon.a (mon.1) 199 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log

pass 3604424 2019-02-17 17:29:42 2019-02-17 17:33:08 2019-02-17 18:05:07 0:31:59 0:20:03 0:11:56 smithi master ubuntu 18.04 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 2
fail 3604425 2019-02-17 17:29:43 2019-02-17 17:35:10 2019-02-17 21:33:14 3:58:04 3:46:06 0:11:58 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
Failure Reason:

"2019-02-17 17:49:45.047399 mon.b (mon.0) 18 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 3604426 2019-02-17 17:29:44 2019-02-17 17:35:11 2019-02-17 18:05:10 0:29:59 0:22:06 0:07:53 smithi master rhel 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_latest.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

"2019-02-17 17:52:16.339665 mon.a (mon.0) 66 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 3604427 2019-02-17 17:29:44 2019-02-17 17:35:10 2019-02-17 18:15:10 0:40:00 0:27:32 0:12:28 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
fail 3604428 2019-02-17 17:29:45 2019-02-17 17:37:06 2019-02-17 17:59:06 0:22:00 0:11:04 0:10:56 smithi master centos 7.5 rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

"2019-02-17 17:53:47.495780 mon.b (mon.1) 10 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 3604429 2019-02-17 17:29:46 2019-02-17 17:37:06 2019-02-17 18:07:06 0:30:00 0:20:05 0:09:55 smithi master ubuntu 18.04 rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi178 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

pass 3604430 2019-02-17 17:29:47 2019-02-17 17:37:06 2019-02-17 17:55:06 0:18:00 0:09:26 0:08:34 smithi master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_seq_read.yaml} 1
fail 3604431 2019-02-17 17:29:47 2019-02-17 17:37:08 2019-02-17 18:45:08 1:08:00 1:01:34 0:06:26 smithi master rhel 7.5 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi058 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

pass 3604432 2019-02-17 17:29:48 2019-02-17 17:38:53 2019-02-17 18:12:52 0:33:59 0:26:29 0:07:30 smithi master ubuntu 16.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
dead 3604433 2019-02-17 17:29:49 2019-02-17 17:39:00 2019-02-18 05:41:22 12:02:22 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_stress_watch.yaml} 2
pass 3604434 2019-02-17 17:29:50 2019-02-17 17:40:43 2019-02-17 18:14:43 0:34:00 0:27:50 0:06:10 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 3604435 2019-02-17 17:29:50 2019-02-17 17:40:45 2019-02-17 17:58:45 0:18:00 0:08:28 0:09:32 smithi master ubuntu 18.04 rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 3604436 2019-02-17 17:29:51 2019-02-17 17:40:54 2019-02-17 18:04:54 0:24:00 0:17:33 0:06:27 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/radosbench_4M_rand_read.yaml} 1
pass 3604437 2019-02-17 17:29:52 2019-02-17 17:40:57 2019-02-17 19:08:58 1:28:01 1:15:29 0:12:32 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
dead 3604438 2019-02-17 17:29:53 2019-02-17 17:41:00 2019-02-18 05:43:27 12:02:27 11:50:36 0:11:51 smithi master ubuntu 16.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/progress.yaml} 2
Failure Reason:

"2019-02-17 17:56:05.202021 mon.b (mon.0) 89 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log

fail 3604439 2019-02-17 17:29:53 2019-02-17 17:41:01 2019-02-17 18:47:01 1:06:00 0:55:54 0:10:06 smithi master ubuntu 18.04 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi049 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

fail 3604440 2019-02-17 17:29:54 2019-02-17 17:41:04 2019-02-17 18:19:04 0:38:00 0:30:58 0:07:02 smithi master rhel 7.5 rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi187 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
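
Most of the failures in this run share the same transient "1/3 mons down ... (MON_DOWN)" health-check warning picked up from the cluster log; the remainder are command failures, crashes, or timeouts. A minimal triage sketch in Python for bucketing those failure reasons by pattern (the jobs.txt input file and the pattern labels are illustrative assumptions, not part of the teuthology output):

    import re
    from collections import Counter

    # Illustrative sketch: jobs.txt is an assumed file holding the
    # failure-reason lines pasted from the run listing above.
    PATTERNS = [
        ("MON_DOWN warning in cluster log", r"\(MON_DOWN\)"),
        ("OSD_DOWN warning in cluster log", r"\(OSD_DOWN\)"),
        ("command timed out (status 124)", r"with status 124:"),
        ("command failed (status 1)", r"with status 1:"),
        ("command crashed", r"Command crashed"),
    ]

    counts = Counter()
    with open("jobs.txt") as f:
        for line in f:
            for label, pattern in PATTERNS:
                if re.search(pattern, line):
                    counts[label] += 1
                    break  # count each failure reason once

    for label, n in counts.most_common():
        print(f"{n:3d}  {label}")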