User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
sage | 2017-12-20 03:23:01 | 2017-12-20 03:23:26 | 2017-12-20 06:55:30 | 3:32:04 | rados | mimic-dev1 | ovh | 2f7765a | 3 | 106 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1983701 | 2017-12-20 03:23:14 | 2017-12-20 03:23:26 | 2017-12-20 03:51:25 | 0:27:59 | 0:16:46 | 0:11:13 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh057 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983702 | 2017-12-20 03:23:14 | 2017-12-20 03:23:26 | 2017-12-20 04:33:26 | 1:10:00 | 0:50:56 | 0:19:04 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_stress_watch.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:00:28.990746 mon.b mon.0 158.69.87.83:6789/0 73 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983703 | 2017-12-20 03:23:15 | 2017-12-20 03:23:26 | 2017-12-20 06:55:30 | 3:32:04 | 3:19:24 | 0:12:40 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test rados/test.sh) on ovh007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 1983704 | 2017-12-20 03:23:16 | 2017-12-20 03:23:26 | 2017-12-20 03:45:25 | 0:21:59 | 0:15:39 | 0:06:20 | ovh | master | rados/singleton/{all/reg11184.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on ovh054 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983705 | 2017-12-20 03:23:17 | 2017-12-20 03:23:26 | 2017-12-20 03:57:26 | 0:34:00 | 0:16:49 | 0:17:11 | ovh | master | rados/objectstore/filejournal.yaml | 1 | |||
Failure Reason:
"2017-12-20 03:54:28.894700 mon.0 mon.0 158.69.87.40:6789/0 35 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983706 | 2017-12-20 03:23:17 | 2017-12-20 03:23:26 | 2017-12-20 04:05:25 | 0:41:59 | 0:22:57 | 0:19:02 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
Command failed on ovh092 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983707 | 2017-12-20 03:23:18 | 2017-12-20 03:23:26 | 2017-12-20 04:05:26 | 0:42:00 | 0:22:26 | 0:19:34 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh013 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983708 | 2017-12-20 03:23:19 | 2017-12-20 03:23:27 | 2017-12-20 03:51:26 | 0:27:59 | 0:17:54 | 0:10:05 | ovh | master | rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_mon_workunits.yaml} | 2 | |||
Failure Reason:
Command failed on ovh095 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983709 | 2017-12-20 03:23:20 | 2017-12-20 03:23:27 | 2017-12-20 04:01:26 | 0:37:59 | 0:17:08 | 0:20:51 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh062 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983710 | 2017-12-20 03:23:21 | 2017-12-20 03:23:26 | 2017-12-20 04:11:26 | 0:48:00 | 0:18:42 | 0:29:18 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
Command failed on ovh054 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983711 | 2017-12-20 03:23:22 | 2017-12-20 03:23:26 | 2017-12-20 04:03:25 | 0:39:59 | 0:21:25 | 0:18:34 | ovh | master | rados/singleton/{all/resolve_stuck_peering.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
Failure Reason:
Command failed on ovh073 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983712 | 2017-12-20 03:23:22 | 2017-12-20 03:23:26 | 2017-12-20 03:53:25 | 0:29:59 | 0:18:07 | 0:11:52 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-12-20 03:46:57.459123 mon.b mon.0 158.69.87.20:6789/0 68 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983713 | 2017-12-20 03:23:23 | 2017-12-20 03:23:27 | 2017-12-20 04:05:26 | 0:41:59 | 0:22:44 | 0:19:15 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
Command failed on ovh020 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983714 | 2017-12-20 03:23:24 | 2017-12-20 03:23:27 | 2017-12-20 03:57:27 | 0:34:00 | 0:17:40 | 0:16:20 | ovh | master | rados/thrash-erasure-code-big/{cluster/{12-osds.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
Command failed on ovh039 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983715 | 2017-12-20 03:23:25 | 2017-12-20 03:23:27 | 2017-12-20 03:53:26 | 0:29:59 | 0:19:07 | 0:10:52 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
Command failed on ovh017 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983716 | 2017-12-20 03:23:26 | 2017-12-20 03:23:27 | 2017-12-20 03:53:27 | 0:30:00 | 0:18:26 | 0:11:34 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
Command failed on ovh083 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983717 | 2017-12-20 03:23:27 | 2017-12-20 03:23:28 | 2017-12-20 03:51:29 | 0:28:01 | 0:15:58 | 0:12:03 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml tasks/rados_striper.yaml} | 2 | |||
Failure Reason:
Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_rados_striper_api_aio'
fail | 1983718 | 2017-12-20 03:23:28 | 2017-12-20 03:23:29 | 2017-12-20 04:03:29 | 0:40:00 | 0:20:49 | 0:19:11 | ovh | master | rados/multimon/{clusters/6.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_clock_no_skews.yaml} | 2 | |||
Failure Reason:
'timechecks'
fail | 1983719 | 2017-12-20 03:23:29 | 2017-12-20 03:23:30 | 2017-12-20 04:11:30 | 0:48:00 | 0:17:29 | 0:30:31 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/btrfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore.yaml rados.yaml thrashers/default.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
Failure Reason:
Command failed on ovh053 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983720 | 2017-12-20 03:23:29 | 2017-12-20 03:23:30 | 2017-12-20 03:57:30 | 0:34:00 | 0:23:50 | 0:10:10 | ovh | master | centos | rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
Command failed on ovh052 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983721 | 2017-12-20 03:23:30 | 2017-12-20 03:23:31 | 2017-12-20 03:55:31 | 0:32:00 | 0:16:26 | 0:15:34 | ovh | master | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-12-20 03:52:44.168961 mon.a mon.0 158.69.87.37:6789/0 62 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983722 | 2017-12-20 03:23:31 | 2017-12-20 03:23:32 | 2017-12-20 03:49:31 | 0:25:59 | 0:15:32 | 0:10:27 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh053 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983723 | 2017-12-20 03:23:31 | 2017-12-20 03:23:32 | 2017-12-20 03:57:32 | 0:34:00 | 0:18:13 | 0:15:47 | ovh | master | rados/singleton/{all/rest-api.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test rest/test.py) on ovh024 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py'
fail | 1983724 | 2017-12-20 03:23:32 | 2017-12-20 03:23:33 | 2017-12-20 04:13:33 | 0:50:00 | 0:38:44 | 0:11:16 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-12-20 03:48:52.372546 mon.a mon.0 158.69.87.192:6789/0 103 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983725 | 2017-12-20 03:23:33 | 2017-12-20 03:23:34 | 2017-12-20 04:05:34 | 0:42:00 | 0:23:04 | 0:18:56 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
Command failed on ovh098 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983726 | 2017-12-20 03:23:33 | 2017-12-20 03:23:34 | 2017-12-20 04:05:34 | 0:42:00 | 0:23:08 | 0:18:52 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh036 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983727 | 2017-12-20 03:23:34 | 2017-12-20 03:23:35 | 2017-12-20 03:53:35 | 0:30:00 | 0:18:01 | 0:11:59 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh026 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983728 | 2017-12-20 03:23:35 | 2017-12-20 03:23:36 | 2017-12-20 04:31:37 | 1:08:01 | 0:51:28 | 0:16:33 | ovh | master | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-12-20 03:52:41.910486 mon.a mon.0 158.69.87.38:6789/0 43 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983729 | 2017-12-20 03:23:35 | 2017-12-20 03:23:37 | 2017-12-20 03:53:36 | 0:29:59 | 0:17:34 | 0:12:25 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
Command failed on ovh040 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983730 | 2017-12-20 03:23:36 | 2017-12-20 03:23:37 | 2017-12-20 04:11:38 | 0:48:01 | 0:40:39 | 0:07:22 | ovh | master | rados/objectstore/filestore-idempotent-aio-journal.yaml | 1 | |||
Failure Reason:
"2017-12-20 03:43:19.660719 mon.0 mon.0 158.69.87.174:6789/0 35 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983731 | 2017-12-20 03:23:37 | 2017-12-20 03:23:38 | 2017-12-20 03:59:38 | 0:36:00 | 0:18:18 | 0:17:42 | ovh | master | rados/thrash-erasure-code-shec/{clusters/{fixed-4.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
Command failed on ovh003 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983732 | 2017-12-20 03:23:38 | 2017-12-20 03:23:38 | 2017-12-20 04:23:42 | 1:00:04 | 0:28:51 | 0:31:13 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983733 | 2017-12-20 03:23:38 | 2017-12-20 03:23:39 | 2017-12-20 03:53:39 | 0:30:00 | 0:18:58 | 0:11:02 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh015 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983734 | 2017-12-20 03:23:39 | 2017-12-20 03:23:40 | 2017-12-20 04:05:41 | 0:42:01 | 0:22:42 | 0:19:19 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
Command failed on ovh010 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983735 | 2017-12-20 03:23:40 | 2017-12-20 03:23:40 | 2017-12-20 04:07:41 | 0:44:01 | 0:18:22 | 0:25:39 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_workunit_loadgen_big.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test rados/load-gen-big.sh) on ovh022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-big.sh'
fail | 1983736 | 2017-12-20 03:23:40 | 2017-12-20 03:45:42 | 2017-12-20 04:15:41 | 0:29:59 | 0:17:04 | 0:12:55 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh095 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983737 | 2017-12-20 03:23:41 | 2017-12-20 03:49:42 | 2017-12-20 04:15:41 | 0:25:59 | 0:17:33 | 0:08:26 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml fs/xfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml thrashers/fastread.yaml workloads/ec-small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh080 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983738 | 2017-12-20 03:23:42 | 2017-12-20 03:51:26 | 2017-12-20 04:13:26 | 0:22:00 | 0:15:29 | 0:06:31 | ovh | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/none.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command failed on ovh057 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd erasure-code-profile set isaprofile name=isaprofile plugin=isa k=2 technique=reed_sol_van m=1 ruleset-failure-domain=osd'
fail | 1983739 | 2017-12-20 03:23:42 | 2017-12-20 03:51:27 | 2017-12-20 04:15:27 | 0:24:00 | 0:16:41 | 0:07:19 | ovh | master | rados/singleton/{all/thrash-rados.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml} | 2 | |||
Failure Reason:
Command failed on ovh046 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983740 | 2017-12-20 03:23:43 | 2017-12-20 03:51:30 | 2017-12-20 04:15:30 | 0:24:00 | 0:17:17 | 0:06:43 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
Command failed on ovh040 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983741 | 2017-12-20 03:23:44 | 2017-12-20 03:53:36 | 2017-12-20 04:17:36 | 0:24:00 | 0:16:02 | 0:07:58 | ovh | master | rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml thrashers/sync-many.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh017 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983742 | 2017-12-20 03:23:44 | 2017-12-20 03:53:36 | 2017-12-20 04:39:36 | 0:46:00 | 0:37:00 | 0:09:00 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:14:34.846276 mon.a mon.0 158.69.88.206:6789/0 103 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983743 | 2017-12-20 03:23:45 | 2017-12-20 03:53:36 | 2017-12-20 04:17:36 | 0:24:00 | 0:17:08 | 0:06:52 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
Command failed on ovh083 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983744 | 2017-12-20 03:23:47 | 2017-12-20 03:53:36 | 2017-12-20 04:13:36 | 0:20:00 | 0:14:56 | 0:05:04 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh015 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983745 | 2017-12-20 03:23:47 | 2017-12-20 03:53:37 | 2017-12-20 04:53:40 | 1:00:03 | 0:51:04 | 0:08:59 | ovh | master | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:15:09.826777 mon.a mon.0 158.69.88.24:6789/0 102 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
fail | 1983746 | 2017-12-20 03:23:48 | 2017-12-20 03:53:40 | 2017-12-20 04:21:40 | 0:28:00 | 0:17:11 | 0:10:49 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh006 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983747 | 2017-12-20 03:23:49 | 2017-12-20 03:55:32 | 2017-12-20 04:21:32 | 0:26:00 | 0:18:02 | 0:07:58 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
Command failed on ovh090 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983748 | 2017-12-20 03:23:50 | 2017-12-20 03:57:35 | 2017-12-20 04:41:35 | 0:44:00 | 0:37:01 | 0:06:59 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh003 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983749 | 2017-12-20 03:23:50 | 2017-12-20 03:57:35 | 2017-12-20 04:23:35 | 0:26:00 | 0:16:43 | 0:09:17 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh064 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983750 | 2017-12-20 03:23:51 | 2017-12-20 03:57:35 | 2017-12-20 04:21:35 | 0:24:00 | 0:15:36 | 0:08:24 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml tasks/rados_workunit_loadgen_mix.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test rados/load-gen-mix.sh) on ovh085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-mix.sh'
fail | 1983751 | 2017-12-20 03:23:52 | 2017-12-20 03:57:35 | 2017-12-20 04:17:35 | 0:20:00 | 0:14:46 | 0:05:14 | ovh | master | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-12-20 04:14:33.054799 mon.a mon.0 158.69.88.25:6789/0 74 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983752 | 2017-12-20 03:23:53 | 2017-12-20 03:59:50 | 2017-12-20 04:23:49 | 0:23:59 | 0:16:40 | 0:07:19 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
Command failed on ovh078 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983753 | 2017-12-20 03:23:54 | 2017-12-20 04:01:40 | 2017-12-20 04:31:39 | 0:29:59 | 0:14:58 | 0:15:01 | ovh | master | rados/thrash-erasure-code-big/{cluster/{12-osds.yaml openstack.yaml} fs/btrfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore.yaml rados.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
Command failed on ovh008 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap' |
fail | 1983754 | 2017-12-20 03:23:54 | 2017-12-20 04:03:29 | 2017-12-20 04:25:27 | 0:21:58 | 0:16:35 | 0:05:23 | ovh | master | rados/singleton/{all/watch-notify-same-primary.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-12-20 04:21:18.892234 mon.0 mon.0 158.69.88.51:6789/0 70 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983755 | 2017-12-20 03:23:55 | 2017-12-20 04:03:32 | 2017-12-20 04:33:31 | 0:29:59 | 0:21:03 | 0:08:56 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/btrfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore.yaml rados.yaml thrashers/mapgap.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason:
Command failed on ovh013 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983756 | 2017-12-20 03:23:56 | 2017-12-20 04:05:28 | 2017-12-20 04:33:27 | 0:27:59 | 0:20:09 | 0:07:50 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh092 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983757 | 2017-12-20 03:23:57 | 2017-12-20 04:05:28 | 2017-12-20 05:15:28 | 1:10:00 | 1:05:58 | 0:04:02 | ovh | master | rados/objectstore/filestore-idempotent.yaml | 1 | |||
Failure Reason:
"2017-12-20 04:23:29.695720 mon.0 mon.0 158.69.88.63:6789/0 35 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983758 | 2017-12-20 03:23:58 | 2017-12-20 04:05:28 | 2017-12-20 04:35:27 | 0:29:59 | 0:21:03 | 0:08:56 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh010 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983759 | 2017-12-20 03:23:59 | 2017-12-20 04:05:35 | 2017-12-20 04:51:35 | 0:46:00 | 0:37:23 | 0:08:37 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
Command failed on ovh058 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983760 | 2017-12-20 03:23:59 | 2017-12-20 04:05:48 | 2017-12-20 04:33:47 | 0:27:59 | 0:20:17 | 0:07:42 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh020 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983761 | 2017-12-20 03:24:00 | 2017-12-20 04:05:48 | 2017-12-20 04:25:47 | 0:19:59 | 0:15:23 | 0:04:36 | ovh | master | rados/singleton/{all/admin-socket.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-12-20 04:23:17.353667 mon.a mon.0 158.69.88.7:6789/0 40 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983762 | 2017-12-20 03:24:01 | 2017-12-20 04:07:43 | 2017-12-20 04:33:42 | 0:25:59 | 0:16:38 | 0:09:21 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
Command failed on ovh054 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983763 | 2017-12-20 03:24:01 | 2017-12-20 04:11:27 | 2017-12-20 04:45:27 | 0:34:00 | 0:16:37 | 0:17:23 | ovh | master | rados/multimon/{clusters/9.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml tasks/mon_clock_with_skews.yaml} | 3 | |||
Failure Reason:
'timechecks' |
fail | 1983764 | 2017-12-20 03:24:02 | 2017-12-20 04:11:31 | 2017-12-20 04:39:31 | 0:28:00 | 0:17:00 | 0:11:00 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
Command failed on ovh031 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983765 | 2017-12-20 03:24:03 | 2017-12-20 04:11:46 | 2017-12-20 04:37:46 | 0:26:00 | 0:16:51 | 0:09:09 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
Command failed on ovh057 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983766 | 2017-12-20 03:24:04 | 2017-12-20 04:13:27 | 2017-12-20 04:51:27 | 0:38:00 | 0:31:23 | 0:06:37 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:30:05.838918 mon.b mon.0 158.69.89.13:6789/0 115 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983767 | 2017-12-20 03:24:04 | 2017-12-20 04:13:35 | 2017-12-20 04:37:34 | 0:23:59 | 0:15:56 | 0:08:03 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test rados/load-gen-mostlyread.sh) on ovh030 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-mostlyread.sh' |
fail | 1983768 | 2017-12-20 03:24:05 | 2017-12-20 04:13:37 | 2017-12-20 04:49:37 | 0:36:00 | 0:20:39 | 0:15:21 | ovh | master | rados/thrash-erasure-code-shec/{clusters/{fixed-4.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore.yaml rados.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
Command failed on ovh046 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983769 | 2017-12-20 03:24:06 | 2017-12-20 04:15:28 | 2017-12-20 04:39:28 | 0:24:00 | 0:16:50 | 0:07:10 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh040 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983770 | 2017-12-20 03:24:07 | 2017-12-20 04:15:31 | 2017-12-20 04:35:31 | 0:20:00 | 0:14:49 | 0:05:11 | ovh | master | rados/singleton/{all/cephtool.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on ovh015 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 1983771 | 2017-12-20 03:24:08 | 2017-12-20 04:15:52 | 2017-12-20 04:39:51 | 0:23:59 | 0:17:02 | 0:06:57 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
Command failed on ovh017 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983772 | 2017-12-20 03:24:09 | 2017-12-20 04:15:52 | 2017-12-20 04:41:51 | 0:25:59 | 0:17:08 | 0:08:51 | ovh | master | rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |||
Failure Reason:
Command failed on ovh083 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983773 | 2017-12-20 03:24:10 | 2017-12-20 04:17:45 | 2017-12-20 04:47:45 | 0:30:00 | 0:19:22 | 0:10:38 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml fs/xfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml thrashers/morepggrow.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason:
Command failed on ovh077 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983774 | 2017-12-20 03:24:10 | 2017-12-20 04:17:45 | 2017-12-20 04:41:45 | 0:24:00 | 0:09:51 | 0:14:09 | ovh | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command failed on ovh090 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap' |
fail | 1983775 | 2017-12-20 03:24:11 | 2017-12-20 04:17:45 | 2017-12-20 04:45:45 | 0:28:00 | 0:16:24 | 0:11:36 | ovh | master | rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:38:32.411278 mon.a mon.0 158.69.89.191:6789/0 3 : cluster [WRN] Health check failed: 6 osds exist in the crush map but not in the osdmap (OSD_ORPHAN)" in cluster log |
fail | 1983776 | 2017-12-20 03:24:12 | 2017-12-20 04:21:34 | 2017-12-20 04:49:34 | 0:28:00 | 0:18:16 | 0:09:44 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh062 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983777 | 2017-12-20 03:24:13 | 2017-12-20 04:21:37 | 2017-12-20 04:47:36 | 0:25:59 | 0:16:48 | 0:09:11 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983778 | 2017-12-20 03:24:13 | 2017-12-20 04:21:41 | 2017-12-20 04:57:41 | 0:36:00 | 0:28:19 | 0:07:41 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:41:09.511404 mon.b mon.0 158.69.89.248:6789/0 67 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983779 | 2017-12-20 03:24:14 | 2017-12-20 04:23:37 | 2017-12-20 04:47:36 | 0:23:59 | 0:19:19 | 0:04:40 | ovh | master | rados/singleton/{all/divergent_priors.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on ovh033 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983780 | 2017-12-20 03:24:15 | 2017-12-20 04:24:05 | 2017-12-20 04:54:02 | 0:29:57 | 0:16:09 | 0:13:48 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh073 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983781 | 2017-12-20 03:24:16 | 2017-12-20 04:24:05 | 2017-12-20 04:52:02 | 0:27:57 | 0:19:02 | 0:08:55 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh036 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
pass | 1983782 | 2017-12-20 03:24:17 | 2017-12-20 04:25:41 | 2017-12-20 04:45:40 | 0:19:59 | 0:10:51 | 0:09:08 | ovh | master | rados/objectstore/fusestore.yaml | 1 | |||
fail | 1983783 | 2017-12-20 03:24:18 | 2017-12-20 04:25:49 | 2017-12-20 04:59:48 | 0:33:59 | 0:19:49 | 0:14:10 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
Command failed on ovh066 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983784 | 2017-12-20 03:24:19 | 2017-12-20 04:31:38 | 2017-12-20 04:59:40 | 0:28:02 | 0:19:20 | 0:08:42 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh054 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983785 | 2017-12-20 03:24:20 | 2017-12-20 04:31:40 | 2017-12-20 05:13:41 | 0:42:01 | 0:32:25 | 0:09:36 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml tasks/readwrite.yaml} | 2 | |||
Failure Reason:
Command failed on ovh092 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
pass | 1983786 | 2017-12-20 03:24:21 | 2017-12-20 04:33:38 | 2017-12-20 05:03:37 | 0:29:59 | 0:25:47 | 0:04:12 | ovh | master | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml} | 1 | |||
fail | 1983787 | 2017-12-20 03:24:21 | 2017-12-20 04:33:38 | 2017-12-20 04:57:37 | 0:23:59 | 0:15:04 | 0:08:55 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
Command failed on ovh093 with status 22: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo ceph osd erasure-code-profile set teuthologyprofile ruleset-failure-domain=osd m=1 k=2'" |
fail | 1983788 | 2017-12-20 03:24:22 | 2017-12-20 04:33:38 | 2017-12-20 04:53:37 | 0:19:59 | 0:15:24 | 0:04:35 | ovh | master | rados/singleton/{all/divergent_priors2.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on ovh013 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983789 | 2017-12-20 03:24:23 | 2017-12-20 04:33:43 | 2017-12-20 05:01:43 | 0:28:00 | 0:19:58 | 0:08:02 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
Command failed on ovh020 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983790 | 2017-12-20 03:24:24 | 2017-12-20 04:33:48 | 2017-12-20 05:01:48 | 0:28:00 | 0:19:48 | 0:08:12 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
Command failed on ovh010 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983791 | 2017-12-20 03:24:25 | 2017-12-20 04:35:29 | 2017-12-20 04:57:28 | 0:21:59 | 0:13:15 | 0:08:44 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml fs/btrfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore.yaml rados.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |||
Failure Reason:
Command failed on ovh057 with status 1: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap' |
fail | 1983792 | 2017-12-20 03:24:26 | 2017-12-20 04:35:32 | 2017-12-20 05:09:32 | 0:34:00 | 0:17:33 | 0:16:27 | ovh | master | rados/thrash-erasure-code-big/{cluster/{12-osds.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
Command failed on ovh009 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983793 | 2017-12-20 03:24:27 | 2017-12-20 04:37:45 | 2017-12-20 05:01:44 | 0:23:59 | 0:15:23 | 0:08:36 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh053 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983794 | 2017-12-20 03:24:28 | 2017-12-20 04:37:48 | 2017-12-20 05:03:48 | 0:26:00 | 0:17:26 | 0:08:34 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
Command failed on ovh100 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983795 | 2017-12-20 03:24:28 | 2017-12-20 04:39:29 | 2017-12-20 04:59:29 | 0:20:00 | 0:15:25 | 0:04:35 | ovh | master | rados/singleton/{all/dump-stuck.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on ovh017 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983796 | 2017-12-20 03:24:29 | 2017-12-20 04:39:33 | 2017-12-20 05:09:32 | 0:29:59 | 0:22:21 | 0:07:38 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
"2017-12-20 04:58:48.368121 mon.b mon.0 158.69.90.54:6789/0 102 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log |
fail | 1983797 | 2017-12-20 03:24:30 | 2017-12-20 04:39:39 | 2017-12-20 05:05:40 | 0:26:01 | 0:18:33 | 0:07:28 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh040 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983798 | 2017-12-20 03:24:31 | 2017-12-20 04:39:53 | 2017-12-20 05:05:52 | 0:25:59 | 0:17:03 | 0:08:56 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed on ovh090 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983799 | 2017-12-20 03:24:32 | 2017-12-20 04:41:37 | 2017-12-20 05:05:37 | 0:24:00 | 0:16:51 | 0:07:09 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
Command failed on ovh083 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983800 | 2017-12-20 03:24:33 | 2017-12-20 04:41:56 | 2017-12-20 05:05:55 | 0:23:59 | 0:17:04 | 0:06:55 | ovh | master | rados/basic/{clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/repair_test.yaml} | 2 | |||
Failure Reason:
Command failed on ovh003 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983801 | 2017-12-20 03:24:34 | 2017-12-20 04:41:56 | 2017-12-20 05:07:55 | 0:25:59 | 0:16:21 | 0:09:38 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh006 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983802 | 2017-12-20 03:24:35 | 2017-12-20 04:45:29 | 2017-12-20 05:05:28 | 0:19:59 | 0:16:21 | 0:03:38 | ovh | master | rados/singleton/{all/ec-lost-unfound.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on ovh082 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983803 | 2017-12-20 03:24:36 | 2017-12-20 04:45:42 | 2017-12-20 05:25:41 | 0:39:59 | 0:33:16 | 0:06:43 | ovh | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
Command failed on ovh077 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' |
fail | 1983804 | 2017-12-20 03:24:37 | 2017-12-20 04:45:46 | 2017-12-20 05:11:46 | 0:26:00 | 0:17:30 | 0:08:30 | ovh | master | rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} | 2 |
Failure Reason: Command failed on ovh018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983805 | 2017-12-20 03:24:37 | 2017-12-20 04:47:38 | 2017-12-20 05:19:37 | 0:31:59 | 0:16:56 | 0:15:03 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/write_fadvise_dontneed.yaml} | 2 |
Failure Reason: Command failed on ovh002 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983806 | 2017-12-20 03:24:38 | 2017-12-20 04:47:38 | 2017-12-20 05:25:37 | 0:37:59 | 0:26:18 | 0:11:41 | ovh | master | rados/multimon/{clusters/21.yaml fs/xfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml} | 3 |
Failure Reason: "2017-12-20 05:12:27.919665 mon.c mon.0 158.69.91.106:6789/0 108 : cluster [ERR] Health check failed: no active mgr (MGR_DOWN)" in cluster log
pass | 1983807 | 2017-12-20 03:24:39 | 2017-12-20 04:47:46 | 2017-12-20 05:05:46 | 0:18:00 | 0:12:46 | 0:05:14 | ovh | master | rados/objectstore/keyvaluedb.yaml | 1 |
fail | 1983808 | 2017-12-20 03:24:40 | 2017-12-20 04:49:44 | 2017-12-20 05:15:44 | 0:26:00 | 0:18:19 | 0:07:41 | ovh | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |
Failure Reason: Command failed on ovh070 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
fail | 1983809 | 2017-12-20 03:24:40 | 2017-12-20 04:49:44 | 2017-12-20 05:15:44 | 0:26:00 | 0:17:17 | 0:08:43 | ovh | master | rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml fs/xfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/default.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 |
Failure Reason: Command failed on ovh062 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'