User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-03-22 16:43:21 | 2019-03-22 16:43:40 | 2019-03-22 17:43:57 | 1:00:17 | rados | wip-kefu-testing-2019-03-22-2235 | mira | 26efca8 | 2 | 14 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3761784 | 2019-03-22 16:43:32 | 2019-03-22 16:43:34 | 2019-03-22 17:41:34 | 0:58:00 | 0:46:13 | 0:11:47 | mira | master | centos | 7.5 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |

Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 3761785 | 2019-03-22 16:43:33 | 2019-03-22 16:43:34 | 2019-03-22 17:35:34 | 0:52:00 | 0:39:58 | 0:12:02 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 |
fail | 3761786 | 2019-03-22 16:43:34 | 2019-03-22 16:43:35 | 2019-03-22 17:03:34 | 0:19:59 | 0:09:18 | 0:10:41 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 |

Failure Reason: "2019-03-22 16:57:33.684169 mon.a (mon.0) 72 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
fail | 3761787 | 2019-03-22 16:43:35 | 2019-03-22 16:43:36 | 2019-03-22 17:21:36 | 0:38:00 | 0:25:36 | 0:12:24 | mira | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op write_excl 50 --op delete 10 --pool unique_pool_0'
fail | 3761788 | 2019-03-22 16:43:36 | 2019-03-22 16:43:37 | 2019-03-22 17:19:37 | 0:36:00 | 0:24:03 | 0:11:57 | mira | master | rhel | 7.5 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_clock_no_skews.yaml} | 3 |

Failure Reason: "2019-03-22 17:16:13.279953 mon.c (mon.0) 52 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
pass | 3761789 | 2019-03-22 16:43:37 | 2019-03-22 16:43:38 | 2019-03-22 17:07:37 | 0:23:59 | 0:13:22 | 0:10:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |
fail | 3761790 | 2019-03-22 16:43:38 | 2019-03-22 16:43:39 | 2019-03-22 17:11:38 | 0:27:59 | 0:14:46 | 0:13:13 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
fail | 3761791 | 2019-03-22 16:43:38 | 2019-03-22 16:43:40 | 2019-03-22 17:05:39 | 0:21:59 | 0:11:19 | 0:10:40 | mira | master | — | — | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 |

Failure Reason: "2019-03-22 16:59:13.684395 mon.b (mon.0) 92 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
fail | 3761792 | 2019-03-22 16:43:39 | 2019-03-22 16:43:41 | 2019-03-22 17:09:40 | 0:25:59 | 0:14:35 | 0:11:24 | mira | master | centos | 7.5 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 |

Failure Reason: "2019-03-22 17:05:11.674994 mon.a (mon.0) 59 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
fail | 3761793 | 2019-03-22 16:43:40 | 2019-03-22 16:43:42 | 2019-03-22 17:05:41 | 0:21:59 | 0:10:30 | 0:11:29 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 10000 --objects 6600 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 1200 --op read 100 --op write 50 --op copy_from 50 --op write_excl 50 --op delete 50 --pool base'
fail | 3761794 | 2019-03-22 16:43:41 | 2019-03-22 16:43:43 | 2019-03-22 17:23:42 | 0:39:59 | 0:26:30 | 0:13:29 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/progress.yaml} | 2 |

Failure Reason: "2019-03-22 17:03:14.082375 mon.a (mon.0) 100 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
fail | 3761795 | 2019-03-22 16:43:42 | 2019-03-22 16:43:44 | 2019-03-22 17:39:44 | 0:56:00 | 0:19:04 | 0:36:56 | mira | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op copy_from 50 --op write_excl 50 --op delete 50 --pool base'
fail | 3761796 | 2019-03-22 16:43:43 | 2019-03-22 16:43:45 | 2019-03-22 17:33:45 | 0:50:00 | 0:17:22 | 0:32:38 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
fail | 3761797 | 2019-03-22 16:43:44 | 2019-03-22 17:03:36 | 2019-03-22 17:37:35 | 0:33:59 | 0:18:05 | 0:15:54 | mira | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/scrub_test.yaml} | 2 |

Failure Reason: "2019-03-22 17:29:28.347244 mon.b (mon.0) 170 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
fail | 3761798 | 2019-03-22 16:43:45 | 2019-03-22 17:05:52 | 2019-03-22 17:43:52 | 0:38:00 | 0:24:02 | 0:13:58 | mira | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 |

Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
dead | 3761799 | 2019-03-22 16:43:46 | 2019-03-22 17:05:52 | 2019-03-22 17:43:52 | 0:38:00 | — | — | mira | master | rhel | 7.5 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 |
dead | 3761800 | 2019-03-22 16:43:47 | 2019-03-22 17:07:39 | 2019-03-22 17:43:38 | 0:35:59 | — | — | mira | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 |
fail | 3761801 | 2019-03-22 16:43:48 | 2019-03-22 17:09:42 | 2019-03-22 17:41:41 | 0:31:59 | 0:17:34 | 0:14:25 | mira | master | centos | 7.5 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 |

Failure Reason: "2019-03-22 17:34:07.563459 mon.a (mon.0) 78 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log
dead | 3761802 | 2019-03-22 16:43:48 | 2019-03-22 17:11:40 | 2019-03-22 17:43:39 | 0:31:59 | — | — | mira | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 |
dead | 3761803 | 2019-03-22 16:43:49 | 2019-03-22 17:19:46 | 2019-03-22 17:43:45 | 0:23:59 | — | — | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 |
dead | 3761804 | 2019-03-22 16:43:50 | 2019-03-22 17:21:37 | 2019-03-22 17:43:37 | 0:22:00 | — | — | mira | master | centos | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 |
dead | 3761805 | 2019-03-22 16:43:51 | 2019-03-22 17:23:58 | 2019-03-22 17:43:57 | 0:19:59 | — | — | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
dead | 3761806 | 2019-03-22 16:43:52 | 2019-03-22 17:33:46 | 2019-03-22 17:43:45 | 0:09:59 | — | — | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | — |
dead | 3761807 | 2019-03-22 16:43:53 | 2019-03-22 17:35:36 | 2019-03-22 17:43:35 | 0:07:59 | — | — | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 |
dead | 3761808 | 2019-03-22 16:43:54 | 2019-03-22 17:37:37 | 2019-03-22 17:43:36 | 0:05:59 | — | — | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 |
dead | 3761809 | 2019-03-22 16:43:54 | 2019-03-22 17:39:45 | 2019-03-22 17:43:44 | 0:03:59 | — | — | mira | master | ubuntu | 16.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | — |
dead | 3761810 | 2019-03-22 16:43:55 | 2019-03-22 17:41:35 | 2019-03-22 17:43:34 | 0:01:59 | — | — | mira | master | rhel | 7.5 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_latest.yaml}} | — |
dead | 3761811 | 2019-03-22 16:43:56 | 2019-03-22 17:41:42 | 2019-03-22 17:43:41 | 0:01:59 | — | — | mira | master | ubuntu | 16.04 | rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | — |