User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-03-25 09:57:19 | 2019-03-25 11:11:26 | 2019-03-25 11:23:25 | 0:11:59 | rados | wip-sage-testing-2019-03-24-1032 | smithi | 51512c6 | 4 | 44 | 44 |
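In the job table below, Runtime appears to be the wall-clock span from Started to Updated, and for jobs that report all three values it equals Duration plus In Waiting (for example, job 3771108: 0:22:51 + 0:09:08 = 0:31:59). Rows missing Duration and In Waiting show only that wall-clock figure.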
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 3771107 | 2019-03-25 09:57:37 | 2019-03-25 09:58:53 | 2019-03-25 11:18:53 | 1:20:00 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | — | |||
fail | 3771108 | 2019-03-25 09:57:38 | 2019-03-25 09:59:42 | 2019-03-25 10:31:41 | 0:31:59 | 0:22:51 | 0:09:08 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:11:17.741128 mon.a (mon.0) 71 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771109 | 2019-03-25 09:57:39 | 2019-03-25 10:00:11 | 2019-03-25 10:34:11 | 0:34:00 | 0:21:10 | 0:12:50 | smithi | master | centos | 7.5 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:22:10.612041 mon.f (mon.0) 83 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771110 | 2019-03-25 09:57:40 | 2019-03-25 10:00:12 | 2019-03-25 10:22:11 | 0:21:59 | 0:14:30 | 0:07:29 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:16:38.043253 mon.a (mon.0) 205 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771111 | 2019-03-25 09:57:41 | 2019-03-25 10:01:41 | 2019-03-25 11:17:41 | 1:16:00 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} | 2 | |||
fail | 3771112 | 2019-03-25 09:57:41 | 2019-03-25 10:01:41 | 2019-03-25 10:49:41 | 0:48:00 | 0:08:04 | 0:39:56 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 3771113 | 2019-03-25 09:57:42 | 2019-03-25 10:01:41 | 2019-03-25 10:25:41 | 0:24:00 | 0:10:21 | 0:13:39 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason:
"2019-03-25 10:19:11.919555 mon.a (mon.0) 84 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771114 | 2019-03-25 09:57:43 | 2019-03-25 10:01:54 | 2019-03-25 10:25:54 | 0:24:00 | 0:14:40 | 0:09:20 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0' |
pass | 3771115 | 2019-03-25 09:57:44 | 2019-03-25 10:03:18 | 2019-03-25 10:19:18 | 0:16:00 | 0:06:45 | 0:09:15 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 3771116 | 2019-03-25 09:57:44 | 2019-03-25 10:05:50 | 2019-03-25 11:15:50 | 1:10:00 | 0:25:03 | 0:44:57 | smithi | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:54:05.954733 mon.a (mon.0) 73 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771117 | 2019-03-25 09:57:45 | 2019-03-25 10:06:24 | 2019-03-25 11:18:25 | 1:12:01 | 0:59:13 | 0:12:48 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=8507) |
fail | 3771118 | 2019-03-25 09:57:46 | 2019-03-25 10:07:25 | 2019-03-25 11:19:27 | 1:12:02 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |||
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=7265) |
dead | 3771119 | 2019-03-25 09:57:47 | 2019-03-25 10:08:55 | 2019-03-25 11:18:56 | 1:10:01 | smithi | master | centos | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |||
fail | 3771120 | 2019-03-25 09:57:47 | 2019-03-25 10:09:37 | 2019-03-25 10:41:38 | 0:32:01 | 0:22:18 | 0:09:43 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:23:46.961842 mon.a (mon.0) 62 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771121 | 2019-03-25 09:57:48 | 2019-03-25 10:10:09 | 2019-03-25 10:28:08 | 0:17:59 | 0:08:15 | 0:09:44 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0' |
fail | 3771122 | 2019-03-25 09:57:49 | 2019-03-25 10:10:09 | 2019-03-25 10:32:08 | 0:21:59 | 0:07:29 | 0:14:30 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:27:48.650990 mon.b (mon.0) 57 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771123 | 2019-03-25 09:57:50 | 2019-03-25 10:10:11 | 2019-03-25 11:18:11 | 1:08:00 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | ||||
dead | 3771124 | 2019-03-25 09:57:51 | 2019-03-25 10:11:33 | 2019-03-25 11:17:33 | 1:06:00 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
dead | 3771125 | 2019-03-25 09:57:52 | 2019-03-25 10:12:13 | 2019-03-25 11:18:18 | 1:06:05 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
fail | 3771126 | 2019-03-25 09:57:53 | 2019-03-25 10:13:25 | 2019-03-25 10:37:24 | 0:23:59 | 0:11:27 | 0:12:32 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason:
"2019-03-25 10:28:46.538980 mon.a (mon.0) 79 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771127 | 2019-03-25 09:57:53 | 2019-03-25 10:15:45 | 2019-03-25 10:57:45 | 0:42:00 | 0:25:43 | 0:16:17 | smithi | master | centos | 7.5 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:38:48.380189 mon.a (mon.0) 74 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771128 | 2019-03-25 09:57:54 | 2019-03-25 10:15:45 | 2019-03-25 10:37:44 | 0:21:59 | 0:13:34 | 0:08:25 | smithi | master | rhel | 7.5 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:33:54.240404 mon.a (mon.0) 64 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771129 | 2019-03-25 09:57:55 | 2019-03-25 10:16:36 | 2019-03-25 10:46:35 | 0:29:59 | 0:14:28 | 0:15:31 | smithi | master | centos | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/scrub_test.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:41:00.022014 mon.b (mon.0) 91 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
pass | 3771130 | 2019-03-25 09:57:56 | 2019-03-25 10:18:07 | 2019-03-25 10:34:06 | 0:15:59 | 0:06:10 | 0:09:49 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 3771131 | 2019-03-25 09:57:56 | 2019-03-25 10:18:37 | 2019-03-25 10:46:36 | 0:27:59 | 0:08:46 | 0:19:13 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op copy_from 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
fail | 3771132 | 2019-03-25 09:57:57 | 2019-03-25 10:19:31 | 2019-03-25 10:43:30 | 0:23:59 | 0:10:11 | 0:13:48 | smithi | master | centos | 7.5 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:40:28.086707 mon.a (mon.0) 70 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771133 | 2019-03-25 09:57:58 | 2019-03-25 10:19:31 | 2019-03-25 10:59:31 | 0:40:00 | 0:25:04 | 0:14:56 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:38:42.895267 mon.a (mon.0) 78 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771134 | 2019-03-25 09:57:59 | 2019-03-25 10:22:04 | 2019-03-25 11:00:04 | 0:38:00 | 0:29:21 | 0:08:39 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/mon.yaml} | 1 | |
Failure Reason:
Command failed (workunit test mon/mon-osdmap-prune.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=51512c6307a29c298db33e79dee5de0af732b3d3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-osdmap-prune.sh' |
dead | 3771135 | 2019-03-25 09:57:59 | 2019-03-25 10:22:12 | 2019-03-25 11:18:12 | 0:56:00 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | — | |||
fail | 3771136 | 2019-03-25 09:58:00 | 2019-03-25 10:25:40 | 2019-03-25 10:47:44 | 0:22:04 | 0:09:49 | 0:12:15 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason:
"2019-03-25 10:41:13.751529 mon.a (mon.0) 80 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771137 | 2019-03-25 09:58:01 | 2019-03-25 10:25:42 | 2019-03-25 10:51:41 | 0:25:59 | 0:07:15 | 0:18:44 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 50 --op write 50 --op write_excl 50 --op delete 10 --pool unique_pool_0' |
fail | 3771138 | 2019-03-25 09:58:02 | 2019-03-25 10:25:55 | 2019-03-25 10:49:59 | 0:24:04 | 0:17:01 | 0:07:03 | smithi | master | rhel | 7.5 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:43:36.361815 mon.a (mon.0) 113 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771139 | 2019-03-25 09:58:03 | 2019-03-25 10:27:09 | 2019-03-25 10:51:08 | 0:23:59 | 0:13:46 | 0:10:13 | smithi | master | rhel | 7.5 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/many.yaml workloads/rados_5925.yaml} | 2 | |
Failure Reason:
"2019-03-25 10:46:50.219221 mon.a (mon.0) 88 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771140 | 2019-03-25 09:58:04 | 2019-03-25 10:28:21 | 2019-03-25 10:52:21 | 0:24:00 | 0:12:31 | 0:11:29 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0' |
fail | 3771141 | 2019-03-25 09:58:04 | 2019-03-25 10:29:42 | 2019-03-25 11:03:42 | 0:34:00 | 0:08:16 | 0:25:44 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
dead | 3771142 | 2019-03-25 09:58:05 | 2019-03-25 10:29:51 | 2019-03-25 11:17:51 | 0:48:00 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | — | |||
dead | 3771143 | 2019-03-25 09:58:06 | 2019-03-25 10:30:27 | 2019-03-25 11:18:28 | 0:48:01 | smithi | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_api_tests.yaml} | 2 | |||
fail | 3771144 | 2019-03-25 09:58:07 | 2019-03-25 10:31:54 | 2019-03-25 11:11:54 | 0:40:00 | 0:29:00 | 0:11:00 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:50:01.273927 mon.a (mon.0) 64 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771145 | 2019-03-25 09:58:07 | 2019-03-25 10:32:06 | 2019-03-25 10:54:05 | 0:21:59 | 0:12:06 | 0:09:53 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:47:01.293039 mon.a (mon.0) 106 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771146 | 2019-03-25 09:58:08 | 2019-03-25 10:32:10 | 2019-03-25 11:08:09 | 0:35:59 | 0:12:48 | 0:23:11 | smithi | master | centos | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_redirect --low_tier_pool low_tier --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op copy_from 100 --op write_excl 50 --op delete 10 --pool unique_pool_0' |
fail | 3771147 | 2019-03-25 09:58:09 | 2019-03-25 10:34:21 | 2019-03-25 11:08:20 | 0:33:59 | 0:08:17 | 0:25:42 | smithi | master | centos | 7.5 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
Failure Reason:
Command failed on smithi107 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
dead | 3771148 | 2019-03-25 09:58:10 | 2019-03-25 10:34:21 | 2019-03-25 11:18:21 | 0:44:00 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_latest.yaml} tasks/crash.yaml} | 2 | |||
dead | 3771149 | 2019-03-25 09:58:11 | 2019-03-25 10:35:42 | 2019-03-25 11:17:42 | 0:42:00 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |||
fail | 3771150 | 2019-03-25 09:58:11 | 2019-03-25 10:37:38 | 2019-03-25 10:59:38 | 0:22:00 | 0:14:17 | 0:07:43 | smithi | master | rhel | 7.5 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:54:12.334447 mon.a (mon.0) 82 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771151 | 2019-03-25 09:58:12 | 2019-03-25 10:37:47 | 2019-03-25 11:07:46 | 0:29:59 | 0:14:30 | 0:15:29 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_chunk --low_tier_pool low_tier --max-ops 4000 --objects 300 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op write_excl 50 --op delete 10 --pool unique_pool_0' |
pass | 3771152 | 2019-03-25 09:58:13 | 2019-03-25 10:40:47 | 2019-03-25 10:56:46 | 0:15:59 | 0:06:38 | 0:09:21 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 3771153 | 2019-03-25 09:58:14 | 2019-03-25 10:41:04 | 2019-03-25 11:13:03 | 0:31:59 | 0:15:23 | 0:16:36 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0' |
fail | 3771154 | 2019-03-25 09:58:14 | 2019-03-25 10:41:30 | 2019-03-25 11:19:30 | 0:38:00 | smithi | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |||
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=30724) |
fail | 3771155 | 2019-03-25 09:58:15 | 2019-03-25 10:41:39 | 2019-03-25 11:07:39 | 0:26:00 | 0:16:20 | 0:09:40 | smithi | master | centos | 7.5 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 10:58:03.142980 mon.a (mon.0) 75 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
fail | 3771156 | 2019-03-25 09:58:16 | 2019-03-25 10:43:43 | 2019-03-25 11:07:42 | 0:23:59 | 0:14:01 | 0:09:58 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op delete 50 --pool unique_pool_0' |
dead | 3771157 | 2019-03-25 09:58:17 | 2019-03-25 10:43:43 | 2019-03-25 11:17:43 | 0:34:00 | smithi | master | rhel | 7.5 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_recovery.yaml} | 2 | |||
fail | 3771158 | 2019-03-25 09:58:17 | 2019-03-25 10:45:00 | 2019-03-25 11:09:00 | 0:24:00 | 0:11:23 | 0:12:37 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test cls/test_cls_lock.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=51512c6307a29c298db33e79dee5de0af732b3d3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh' |
fail | 3771159 | 2019-03-25 09:58:18 | 2019-03-25 10:46:30 | 2019-03-25 11:10:29 | 0:23:59 | 0:15:45 | 0:08:14 | smithi | master | rhel | 7.5 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_lock.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=51512c6307a29c298db33e79dee5de0af732b3d3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh' |
pass | 3771160 | 2019-03-25 09:58:19 | 2019-03-25 10:46:30 | 2019-03-25 11:16:29 | 0:29:59 | 0:20:25 | 0:09:34 | smithi | master | centos | 7.5 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
dead | 3771161 | 2019-03-25 09:58:20 | 2019-03-25 10:46:36 | 2019-03-25 11:18:41 | 0:32:05 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_write.yaml} | 1 | |||
fail | 3771162 | 2019-03-25 09:58:21 | 2019-03-25 10:46:37 | 2019-03-25 11:12:37 | 0:26:00 | 0:19:19 | 0:06:41 | smithi | master | rhel | 7.5 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 11:03:14.756479 mon.a (mon.0) 79 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771163 | 2019-03-25 09:58:21 | 2019-03-25 10:46:46 | 2019-03-25 11:18:46 | 0:32:00 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
dead | 3771164 | 2019-03-25 09:58:22 | 2019-03-25 10:47:46 | 2019-03-25 11:17:46 | 0:30:00 | smithi | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} | 1 | |||
fail | 3771165 | 2019-03-25 09:58:23 | 2019-03-25 10:47:46 | 2019-03-25 11:11:51 | 0:24:05 | 0:07:32 | 0:16:33 | smithi | master | centos | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_latest.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Command failed on smithi183 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
fail | 3771166 | 2019-03-25 09:58:24 | 2019-03-25 10:49:14 | 2019-03-25 11:11:14 | 0:22:00 | 0:11:52 | 0:10:08 | smithi | master | centos | 7.5 | rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 11:06:09.083913 mon.a (mon.0) 60 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771167 | 2019-03-25 09:58:24 | 2019-03-25 10:49:42 | 2019-03-25 11:17:42 | 0:28:00 | smithi | master | centos | 7.5 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/one.yaml workloads/rados_api_tests.yaml} | 2 | |||
fail | 3771168 | 2019-03-25 09:58:25 | 2019-03-25 10:49:42 | 2019-03-25 11:11:42 | 0:22:00 | 0:08:34 | 0:13:26 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --write-fadvise-dontneed --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op write_excl 50 --op delete 10 --pool unique_pool_0' |
dead | 3771169 | 2019-03-25 09:58:26 | 2019-03-25 10:50:02 | 2019-03-25 11:18:02 | 0:28:00 | smithi | master | centos | 7.5 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
dead | 3771170 | 2019-03-25 09:58:26 | 2019-03-25 10:50:40 | 2019-03-25 11:18:40 | 0:28:00 | smithi | master | rhel | 7.5 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} | 1 | |||
dead | 3771171 | 2019-03-25 09:58:27 | 2019-03-25 10:51:09 | 2019-03-25 11:19:09 | 0:28:00 | smithi | master | centos | 7.5 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | — | |||
dead | 3771172 | 2019-03-25 09:58:28 | 2019-03-25 10:51:54 | 2019-03-25 11:17:54 | 0:26:00 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |||
fail | 3771173 | 2019-03-25 09:58:29 | 2019-03-25 10:51:54 | 2019-03-25 11:15:54 | 0:24:00 | 0:12:56 | 0:11:04 | smithi | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason:
"2019-03-25 11:06:42.554348 mon.a (mon.0) 82 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771174 | 2019-03-25 09:58:29 | 2019-03-25 10:52:22 | 2019-03-25 11:18:21 | 0:25:59 | 0:12:43 | 0:13:16 | smithi | master | centos | 7.5 | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} | 1 | |
Failure Reason:
"2019-03-25 11:13:54.343426 mon.a (mon.0) 69 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log |
dead | 3771175 | 2019-03-25 09:58:30 | 2019-03-25 10:54:18 | 2019-03-25 11:18:17 | 0:23:59 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} | 2 | |||
dead | 3771176 | 2019-03-25 09:58:31 | 2019-03-25 10:56:30 | 2019-03-25 11:18:30 | 0:22:00 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | — | |||
dead | 3771177 | 2019-03-25 09:58:32 | 2019-03-25 10:56:30 | 2019-03-25 11:18:30 | 0:22:00 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
dead | 3771178 | 2019-03-25 09:58:33 | 2019-03-25 10:56:47 | 2019-03-25 11:18:47 | 0:22:00 | smithi | master | ubuntu | 16.04 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
fail | 3771179 | 2019-03-25 09:58:33 | 2019-03-25 10:57:57 | 2019-03-25 11:15:57 | 0:18:00 | 0:08:18 | 0:09:42 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0' |
dead | 3771180 | 2019-03-25 09:58:34 | 2019-03-25 10:58:06 | 2019-03-25 11:18:06 | 0:20:00 | smithi | master | rhel | 7.5 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | — | |||
dead | 3771181 | 2019-03-25 09:58:35 | 2019-03-25 10:59:43 | 2019-03-25 11:17:43 | 0:18:00 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
dead | 3771182 | 2019-03-25 09:58:36 | 2019-03-25 10:59:43 | 2019-03-25 11:17:43 | 0:18:00 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
dead | 3771183 | 2019-03-25 09:58:36 | 2019-03-25 11:00:05 | 2019-03-25 11:18:05 | 0:18:00 | smithi | master | rhel | 7.5 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_latest.yaml} tasks/failover.yaml} | 2 | |||
dead | 3771184 | 2019-03-25 09:58:37 | 2019-03-25 11:01:29 | 2019-03-25 11:17:28 | 0:15:59 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | — | |||
dead | 3771185 | 2019-03-25 09:58:38 | 2019-03-25 11:03:54 | 2019-03-25 11:17:54 | 0:14:00 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
dead | 3771186 | 2019-03-25 09:58:39 | 2019-03-25 11:05:39 | 2019-03-25 11:17:38 | 0:11:59 | smithi | master | rhel | 7.5 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |||
dead | 3771187 | 2019-03-25 09:58:39 | 2019-03-25 11:07:13 | 2019-03-25 11:19:12 | 0:11:59 | smithi | master | rhel | 7.5 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
dead | 3771188 | 2019-03-25 09:58:40 | 2019-03-25 11:07:40 | 2019-03-25 11:17:39 | 0:09:59 | smithi | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | — | |||
dead | 3771189 | 2019-03-25 09:58:41 | 2019-03-25 11:07:44 | 2019-03-25 11:17:43 | 0:09:59 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_stress_watch.yaml} | 2 | |||
dead | 3771190 | 2019-03-25 09:58:42 | 2019-03-25 11:07:47 | 2019-03-25 11:17:47 | 0:10:00 | smithi | master | ubuntu | 16.04 | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |||
dead | 3771191 | 2019-03-25 09:58:42 | 2019-03-25 11:08:06 | 2019-03-25 11:18:05 | 0:09:59 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
dead | 3771192 | 2019-03-25 09:58:43 | 2019-03-25 11:08:10 | 2019-03-25 11:18:10 | 0:10:00 | smithi | master | ubuntu | 16.04 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |||
dead | 3771193 | 2019-03-25 09:58:44 | 2019-03-25 11:08:33 | 2019-03-25 11:18:33 | 0:10:00 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | ||||
dead | 3771194 | 2019-03-25 09:58:45 | 2019-03-25 11:09:01 | 2019-03-25 11:19:00 | 0:09:59 | smithi | master | centos | 7.5 | rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/scrub.yaml} | 1 | |||
dead | 3771195 | 2019-03-25 09:58:46 | 2019-03-25 11:09:54 | 2019-03-25 11:17:54 | 0:08:00 | smithi | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
dead | 3771196 | 2019-03-25 09:58:46 | 2019-03-25 11:10:30 | 2019-03-25 11:18:29 | 0:07:59 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | — | |||
dead | 3771197 | 2019-03-25 09:58:47 | 2019-03-25 11:10:53 | 2019-03-25 11:18:52 | 0:07:59 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |||
fail | 3771198 | 2019-03-25 09:58:48 | 2019-03-25 11:11:26 | 2019-03-25 11:23:25 | 0:11:59 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | — | |||
Failure Reason:
machine smithi095.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_sage@teuthology |
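The dominant failure reason above is the POOL_FULL health warning being found in the cluster log. For illustration only, and as an assumption rather than anything taken from this run, an expected cluster-log warning is normally silenced with a log-whitelist entry in the suite yaml, roughly like this:

```yaml
# Hypothetical override fragment (not from this run): whitelist the
# POOL_FULL health warning so that its appearance in the cluster log
# does not fail the job. Key names follow the 2019-era ceph qa style.
overrides:
  ceph:
    log-whitelist:
      - \(POOL_FULL\)
```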