User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-12-05 05:20:01 | 2018-12-15 11:56:49 | 2018-12-16 10:50:37 | 22:53:48 | kcephfs | mimic | ovh | 802ee23 | 91 | 142 | 17 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 3308632 | 2018-12-05 05:20:24 | 2018-12-15 11:48:04 | 2018-12-15 13:02:04 | 1:14:00 | 0:21:37 | 0:52:23 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3308633 | 2018-12-05 05:20:25 | 2018-12-15 11:48:29 | 2018-12-15 13:48:30 | 2:00:01 | 0:09:59 | 1:50:02 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh042 with status 1: '\n sudo yum -y install bison\n '
fail | 3308634 | 2018-12-05 05:20:26 | 2018-12-15 11:50:38 | 2018-12-15 16:56:42 | 5:06:04 | 0:10:18 | 4:55:46 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh061 with status 1: '\n sudo yum -y install bison\n '
dead | 3308635 | 2018-12-05 05:20:27 | 2018-12-15 11:54:30 | 2018-12-16 00:01:39 | 12:07:09 | | | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3308636 | 2018-12-05 05:20:28 | 2018-12-15 11:56:49 | 2018-12-15 13:10:49 | 1:14:00 | 0:51:29 | 0:22:31 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3308637 | 2018-12-05 05:20:28 | 2018-12-15 12:01:01 | 2018-12-15 18:23:06 | 6:22:05 | 0:21:53 | 6:00:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3308638 | 2018-12-05 05:20:29 | 2018-12-15 12:01:51 | 2018-12-15 14:01:52 | 2:00:01 | 1:11:24 | 0:48:37 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: "2018-12-15 13:38:40.549744 mon.b mon.0 158.69.67.252:6789/0 159 : cluster [WRN] Health check failed: Degraded data redundancy: 7574/119844 objects degraded (6.320%), 3 pgs degraded (PG_DEGRADED)" in cluster log
pass | 3308639 | 2018-12-05 05:20:30 | 2018-12-15 12:20:39 | 2018-12-15 13:16:39 | 0:56:00 | 0:30:28 | 0:25:32 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3308640 | 2018-12-05 05:20:31 | 2018-12-15 12:20:42 | 2018-12-15 15:38:44 | 3:18:02 | 0:27:20 | 2:50:42 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
pass | 3308641 | 2018-12-05 05:20:32 | 2018-12-15 12:22:19 | 2018-12-15 14:00:20 | 1:38:01 | 1:20:19 | 0:17:42 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
pass | 3308642 | 2018-12-05 05:20:33 | 2018-12-15 12:22:49 | 2018-12-15 13:12:49 | 0:50:00 | 0:32:27 | 0:17:33 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308643 | 2018-12-05 05:20:34 | 2018-12-15 12:24:22 | 2018-12-15 17:14:25 | 4:50:03 | 0:10:26 | 4:39:37 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh010 with status 1: '\n sudo yum -y install bison\n '
fail | 3308644 | 2018-12-05 05:20:34 | 2018-12-15 12:28:54 | 2018-12-15 13:48:55 | 1:20:01 | 0:08:15 | 1:11:46 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install bison\n '
fail | 3308645 | 2018-12-05 05:20:35 | 2018-12-15 12:28:54 | 2018-12-15 13:56:55 | 1:28:01 | 1:05:05 | 0:22:56 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-15 13:09:54.974941 mon.a mon.0 158.69.65.123:6789/0 229 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 3308646 | 2018-12-05 05:20:36 | 2018-12-15 12:28:55 | 2018-12-16 00:31:06 | 12:02:11 | | | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |
fail | 3308647 | 2018-12-05 05:20:37 | 2018-12-15 12:30:56 | 2018-12-15 14:00:56 | 1:30:00 | 0:09:58 | 1:20:02 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh082 with status 1: '\n sudo yum -y install bison\n '
fail | 3308648 | 2018-12-05 05:20:37 | 2018-12-15 12:30:56 | 2018-12-15 13:52:56 | 1:22:00 | 0:09:00 | 1:13:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh001 with status 1: '\n sudo yum -y install bison\n '
fail | 3308649 | 2018-12-05 05:20:38 | 2018-12-15 12:33:17 | 2018-12-15 21:49:25 | 9:16:08 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
pass | 3308650 | 2018-12-05 05:20:39 | 2018-12-15 12:34:54 | 2018-12-15 14:12:54 | 1:38:00 | 1:10:04 | 0:27:56 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3308651 | 2018-12-05 05:20:40 | 2018-12-15 12:36:19 | 2018-12-15 13:24:19 | 0:48:00 | 0:17:30 | 0:30:30 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3308652 | 2018-12-05 05:20:40 | 2018-12-15 12:38:53 | 2018-12-15 16:00:55 | 3:22:02 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Could not reconnect to ubuntu@ovh046.front.sepia.ceph.com
pass | 3308653 | 2018-12-05 05:20:41 | 2018-12-15 12:40:56 | 2018-12-15 14:04:57 | 1:24:01 | 0:51:11 | 0:32:50 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308654 | 2018-12-05 05:20:42 | 2018-12-15 12:43:17 | 2018-12-15 13:43:16 | 0:59:59 | 0:08:29 | 0:51:30 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh048 with status 1: '\n sudo yum -y install bison\n '
fail | 3308655 | 2018-12-05 05:20:43 | 2018-12-15 12:50:53 | 2018-12-15 17:08:56 | 4:18:03 | 0:10:01 | 4:08:02 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh038 with status 1: '\n sudo yum -y install bison\n '
fail | 3308656 | 2018-12-05 05:20:44 | 2018-12-15 12:50:53 | 2018-12-15 13:16:52 | 0:25:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh041.front.sepia.ceph.com
fail | 3308657 | 2018-12-05 05:20:44 | 2018-12-15 12:52:53 | 2018-12-15 13:24:53 | 0:32:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh037.front.sepia.ceph.com
pass | 3308658 | 2018-12-05 05:20:45 | 2018-12-15 13:02:16 | 2018-12-15 14:50:17 | 1:48:01 | 1:07:01 | 0:41:00 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
fail | 3308659 | 2018-12-05 05:20:46 | 2018-12-15 13:02:52 | 2018-12-15 15:16:53 | 2:14:01 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Could not reconnect to ubuntu@ovh033.front.sepia.ceph.com
pass | 3308660 | 2018-12-05 05:20:47 | 2018-12-15 13:06:49 | 2018-12-15 14:18:49 | 1:12:00 | 0:38:08 | 0:33:52 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3308661 | 2018-12-05 05:20:47 | 2018-12-15 13:09:10 | 2018-12-15 14:29:11 | 1:20:01 | 0:46:50 | 0:33:11 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3308662 | 2018-12-05 05:20:48 | 2018-12-15 13:11:02 | 2018-12-16 01:13:13 | 12:02:11 | | | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | — | |
pass | 3308663 | 2018-12-05 05:20:49 | 2018-12-15 13:13:00 | 2018-12-15 14:31:01 | 1:18:01 | 1:02:51 | 0:15:10 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3308664 | 2018-12-05 05:20:50 | 2018-12-15 13:13:05 | 2018-12-15 14:07:05 | 0:54:00 | 0:08:35 | 0:45:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh085 with status 1: '\n sudo yum -y install bison\n '
pass | 3308665 | 2018-12-05 05:20:50 | 2018-12-15 13:16:49 | 2018-12-15 20:58:57 | 7:42:08 | 0:22:20 | 7:19:48 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3308666 | 2018-12-05 05:20:51 | 2018-12-15 13:16:49 | 2018-12-15 15:22:50 | 2:06:01 | 1:33:48 | 0:32:13 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-15 14:29:07.732248 mon.a mon.0 158.69.66.252:6789/0 417 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3308667 | 2018-12-05 05:20:52 | 2018-12-15 13:16:53 | 2018-12-15 14:20:54 | 1:04:01 | 0:09:39 | 0:54:22 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh096 with status 1: '\n sudo yum -y install bison\n '
fail | 3308668 | 2018-12-05 05:20:52 | 2018-12-15 13:24:32 | 2018-12-15 16:08:33 | 2:44:01 | 0:25:23 | 2:18:38 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_full_fclose (tasks.cephfs.test_full.TestClusterFull)
fail | 3308669 | 2018-12-05 05:20:53 | 2018-12-15 13:24:54 | 2018-12-15 14:22:54 | 0:58:00 | 0:08:35 | 0:49:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh067 with status 1: '\n sudo yum -y install bison\n '
pass | 3308670 | 2018-12-05 05:20:54 | 2018-12-15 13:25:17 | 2018-12-15 14:25:17 | 1:00:00 | 0:40:42 | 0:19:18 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3308671 | 2018-12-05 05:20:55 | 2018-12-15 13:30:31 | 2018-12-15 18:20:35 | 4:50:04 | 0:22:35 | 4:27:29 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
pass | 3308672 | 2018-12-05 05:20:56 | 2018-12-15 13:32:40 | 2018-12-15 14:58:41 | 1:26:01 | 0:30:52 | 0:55:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3308673 | 2018-12-05 05:20:56 | 2018-12-15 13:32:53 | 2018-12-15 15:04:53 | 1:32:00 | 0:52:43 | 0:39:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3308674 | 2018-12-05 05:20:57 | 2018-12-15 13:43:18 | 2018-12-15 15:43:19 | 2:00:01 | 0:24:22 | 1:35:39 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
pass | 3308675 | 2018-12-05 05:20:58 | 2018-12-15 13:48:42 | 2018-12-15 14:50:42 | 1:02:00 | 0:39:09 | 0:22:51 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308676 | 2018-12-05 05:20:59 | 2018-12-15 13:48:56 | 2018-12-15 14:18:55 | 0:29:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh051.front.sepia.ceph.com
fail | 3308677 | 2018-12-05 05:20:59 | 2018-12-15 13:52:57 | 2018-12-15 20:11:02 | 6:18:05 | 0:10:31 | 6:07:34 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh070 with status 1: '\n sudo yum -y install bison\n '
fail | 3308678 | 2018-12-05 05:21:00 | 2018-12-15 13:57:07 | 2018-12-15 15:13:07 | 1:16:00 | 0:51:03 | 0:24:57 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-15 14:46:20.488520 mon.b mon.0 158.69.64.184:6789/0 246 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3308679 | 2018-12-05 05:21:01 | 2018-12-15 14:00:31 | 2018-12-15 14:42:31 | 0:42:00 | 0:18:52 | 0:23:08 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3308680 | 2018-12-05 05:21:02 | 2018-12-15 14:00:57 | 2018-12-15 19:05:01 | 5:04:04 | 0:36:21 | 4:27:43 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3308681 | 2018-12-05 05:21:02 | 2018-12-15 14:02:04 | 2018-12-15 14:40:04 | 0:38:00 | 0:16:12 | 0:21:48 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3308682 | 2018-12-05 05:21:03 | 2018-12-15 14:05:09 | 2018-12-15 14:53:09 | 0:48:00 | 0:17:37 | 0:30:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3308683 | 2018-12-05 05:21:04 | 2018-12-15 14:07:07 | 2018-12-15 15:51:08 | 1:44:01 | 1:18:03 | 0:25:58 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
pass | 3308684 | 2018-12-05 05:21:05 | 2018-12-15 14:08:52 | 2018-12-15 18:10:55 | 4:02:03 | 0:22:04 | 3:39:59 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3308685 | 2018-12-05 05:21:06 | 2018-12-15 14:13:07 | 2018-12-15 14:33:06 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh073.front.sepia.ceph.com
pass | 3308686 | 2018-12-05 05:21:06 | 2018-12-15 14:16:57 | 2018-12-15 15:42:57 | 1:26:00 | 1:03:29 | 0:22:31 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3308687 | 2018-12-05 05:21:07 | 2018-12-15 14:19:02 | 2018-12-15 17:45:04 | 3:26:02 | 0:20:18 | 3:05:44 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3308688 | 2018-12-05 05:21:08 | 2018-12-15 14:19:02 | 2018-12-15 15:53:03 | 1:34:01 | 0:57:28 | 0:36:33 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
pass | 3308689 | 2018-12-05 05:21:09 | 2018-12-15 14:19:02 | 2018-12-15 15:07:02 | 0:48:00 | 0:26:52 | 0:21:08 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3308690 | 2018-12-05 05:21:09 | 2018-12-15 14:21:06 | 2018-12-15 17:11:08 | 2:50:02 | 0:29:35 | 2:20:27 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3308691 | 2018-12-05 05:21:10 | 2018-12-15 14:21:51 | 2018-12-15 16:09:52 | 1:48:01 | 1:25:23 | 0:22:38 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-15 15:32:27.031222 mon.a mon.0 158.69.66.28:6789/0 904 : cluster [ERR] Health check failed: mon b is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3308692 | 2018-12-05 05:21:11 | 2018-12-15 14:23:06 | 2018-12-15 15:53:07 | 1:30:01 | 1:02:42 | 0:27:19 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-15 15:17:04.117057 mon.a mon.0 158.69.64.192:6789/0 603 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 3308693 | 2018-12-05 05:21:12 | 2018-12-15 14:25:18 | 2018-12-16 02:46:30 | 12:21:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
fail | 3308694 | 2018-12-05 05:21:12 | 2018-12-15 14:29:12 | 2018-12-15 14:53:11 | 0:23:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh039.front.sepia.ceph.com
fail | 3308695 | 2018-12-05 05:21:13 | 2018-12-15 14:31:08 | 2018-12-15 15:45:09 | 1:14:01 | 0:58:55 | 0:15:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-15 15:18:02.478961 mon.b mon.0 158.69.68.198:6789/0 283 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log
pass | 3308696 | 2018-12-05 05:21:14 | 2018-12-15 14:33:07 | 2018-12-15 15:49:07 | 1:16:00 | 0:19:53 | 0:56:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
fail | 3308697 | 2018-12-05 05:21:15 | 2018-12-15 14:40:05 | 2018-12-15 15:46:05 | 1:06:00 | 0:08:41 | 0:57:19 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh049 with status 1: '\n sudo yum -y install bison\n '
fail | 3308698 | 2018-12-05 05:21:15 | 2018-12-15 14:42:38 | 2018-12-15 15:00:37 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh054.front.sepia.ceph.com
fail | 3308699 | 2018-12-05 05:21:16 | 2018-12-15 14:42:38 | 2018-12-15 17:36:40 | 2:54:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh095.front.sepia.ceph.com
fail | 3308700 | 2018-12-05 05:21:17 | 2018-12-15 14:50:18 | 2018-12-15 16:12:19 | 1:22:01 | 0:45:40 | 0:36:21 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-15 15:37:14.375881 mon.b mon.0 158.69.65.133:6789/0 89 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3308701 | 2018-12-05 05:21:18 | 2018-12-15 14:50:54 | 2018-12-15 15:12:53 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh022.front.sepia.ceph.com
dead | 3308702 | 2018-12-05 05:21:18 | 2018-12-15 14:53:10 | 2018-12-16 02:55:21 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | — | |||
pass | 3308703 | 2018-12-05 05:21:19 | 2018-12-15 14:53:12 | 2018-12-15 15:55:12 | 1:02:00 | 0:32:10 | 0:29:50 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308704 | 2018-12-05 05:21:20 | 2018-12-15 14:58:52 | 2018-12-15 15:20:52 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh035.front.sepia.ceph.com
fail | 3308705 | 2018-12-05 05:21:21 | 2018-12-15 14:59:08 | 2018-12-15 16:29:08 | 1:30:00 | 0:10:48 | 1:19:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh090 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3308706 | 2018-12-05 05:21:21 | 2018-12-15 15:00:50 | 2018-12-15 15:58:50 | 0:58:00 | 0:09:57 | 0:48:03 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh055 with status 1: '\n sudo yum -y install bison\n '
fail | 3308707 | 2018-12-05 05:21:22 | 2018-12-15 15:02:50 | 2018-12-15 15:16:50 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com
fail | 3308708 | 2018-12-05 05:21:23 | 2018-12-15 15:05:00 | 2018-12-15 15:27:00 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh088.front.sepia.ceph.com
fail | 3308709 | 2018-12-05 05:21:24 | 2018-12-15 15:07:14 | 2018-12-15 19:27:17 | 4:20:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com
fail | 3308710 | 2018-12-05 05:21:24 | 2018-12-15 15:13:05 | 2018-12-15 16:15:05 | 1:02:00 | 0:08:38 | 0:53:22 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install bison\n '
pass | 3308711 | 2018-12-05 05:21:25 | 2018-12-15 15:13:08 | 2018-12-15 16:41:09 | 1:28:01 | 1:15:50 | 0:12:11 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3308712 | 2018-12-05 05:21:26 | 2018-12-15 15:17:02 | 2018-12-15 18:45:04 | 3:28:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh036.front.sepia.ceph.com
pass | 3308713 | 2018-12-05 05:21:27 | 2018-12-15 15:17:02 | 2018-12-15 16:37:02 | 1:20:00 | 0:55:25 | 0:24:35 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3308714 | 2018-12-05 05:21:28 | 2018-12-15 15:17:51 | 2018-12-15 15:39:51 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh042.front.sepia.ceph.com
fail | 3308715 | 2018-12-05 05:21:28 | 2018-12-15 15:21:04 | 2018-12-16 00:19:12 | 8:58:08 | 0:11:13 | 8:46:55 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh055 with status 1: '\n sudo yum -y install bison\n '
pass | 3308716 | 2018-12-05 05:21:29 | 2018-12-15 15:23:02 | 2018-12-15 17:13:03 | 1:50:01 | 1:16:43 | 0:33:18 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3308717 | 2018-12-05 05:21:30 | 2018-12-15 15:26:53 | 2018-12-15 16:28:53 | 1:02:00 | 0:50:23 | 0:11:37 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=802ee2380274f15db187ddd1219533c6b233da6a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 3308718 | 2018-12-05 05:21:31 | 2018-12-15 15:27:01 | 2018-12-15 20:31:05 | 5:04:04 | 0:10:16 | 4:53:48 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh082 with status 1: '\n sudo yum -y install bison\n '
pass | 3308719 | 2018-12-05 05:21:32 | 2018-12-15 15:38:57 | 2018-12-15 16:46:57 | 1:08:00 | 0:43:10 | 0:24:50 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3308720 | 2018-12-05 05:21:32 | 2018-12-15 15:39:52 | 2018-12-15 17:15:53 | 1:36:01 | 1:24:00 | 0:12:01 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-15 16:11:45.513788 mon.a mon.0 158.69.70.86:6789/0 164 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3308721 | 2018-12-05 05:21:33 | 2018-12-15 15:43:09 | 2018-12-15 17:53:11 | 2:10:02 | 0:10:41 | 1:59:21 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh026 with status 1: '\n sudo yum -y install bison\n '
fail | 3308722 | 2018-12-05 05:21:34 | 2018-12-15 15:43:20 | 2018-12-15 16:43:20 | 1:00:00 | 0:08:15 | 0:51:45 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh075 with status 1: '\n sudo yum -y install bison\n '
fail | 3308723 | 2018-12-05 05:21:35 | 2018-12-15 15:45:21 | 2018-12-15 16:19:21 | 0:34:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
fail | 3308724 | 2018-12-05 05:21:35 | 2018-12-15 15:46:06 | 2018-12-15 21:32:11 | 5:46:05 | 0:11:23 | 5:34:42 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh078 with status 1: '\n sudo yum -y install bison\n '
fail | 3308725 | 2018-12-05 05:21:36 | 2018-12-15 15:49:19 | 2018-12-15 16:13:19 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh012.front.sepia.ceph.com
fail | 3308726 | 2018-12-05 05:21:37 | 2018-12-15 15:51:10 | 2018-12-15 16:53:10 | 1:02:00 | 0:08:18 | 0:53:42 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh032 with status 1: '\n sudo yum -y install bison\n '
fail | 3308727 | 2018-12-05 05:21:38 | 2018-12-15 15:53:14 | 2018-12-15 18:21:16 | 2:28:02 | 0:10:21 | 2:17:41 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh022 with status 1: '\n sudo yum -y install bison\n '
pass | 3308728 | 2018-12-05 05:21:38 | 2018-12-15 15:53:14 | 2018-12-15 16:47:14 | 0:54:00 | 0:32:43 | 0:21:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308729 | 2018-12-05 05:21:39 | 2018-12-15 15:55:14 | 2018-12-15 16:11:13 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh048.front.sepia.ceph.com
pass | 3308730 | 2018-12-05 05:21:40 | 2018-12-15 15:59:02 | 2018-12-15 17:27:03 | 1:28:01 | 0:41:43 | 0:46:18 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
fail | 3308731 | 2018-12-05 05:21:41 | 2018-12-15 16:01:08 | 2018-12-15 16:21:07 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh012.front.sepia.ceph.com
fail | 3308732 | 2018-12-05 05:21:41 | 2018-12-15 16:07:12 | 2018-12-15 16:27:11 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh022.front.sepia.ceph.com
fail | 3308733 | 2018-12-05 05:21:42 | 2018-12-15 16:08:46 | 2018-12-15 17:28:46 | 1:20:00 | 0:09:12 | 1:10:48 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh030 with status 1: '\n sudo yum -y install bison\n '
pass | 3308734 | 2018-12-05 05:21:43 | 2018-12-15 16:10:05 | 2018-12-15 19:46:07 | 3:36:02 | 0:21:38 | 3:14:24 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
pass | 3308735 | 2018-12-05 05:21:44 | 2018-12-15 16:11:26 | 2018-12-15 17:33:26 | 1:22:00 | 1:02:01 | 0:19:59 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3308736 | 2018-12-05 05:21:44 | 2018-12-15 16:12:20 | 2018-12-15 17:44:20 | 1:32:00 | 0:55:09 | 0:36:51 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3308737 | 2018-12-05 05:21:45 | 2018-12-15 16:13:20 | 2018-12-15 19:29:23 | 3:16:03 | 0:20:49 | 2:55:14 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3308738 | 2018-12-05 05:21:46 | 2018-12-15 16:15:17 | 2018-12-15 16:33:16 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh029.front.sepia.ceph.com
fail | 3308739 | 2018-12-05 05:21:47 | 2018-12-15 16:19:32 | 2018-12-15 16:57:31 | 0:37:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com
pass | 3308740 | 2018-12-05 05:21:48 | 2018-12-15 16:21:09 | 2018-12-15 18:55:10 | 2:34:01 | 0:31:11 | 2:02:50 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3308741 | 2018-12-05 05:21:48 | 2018-12-15 16:27:13 | 2018-12-15 17:57:14 | 1:30:01 | 1:12:52 | 0:17:09 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-15 16:53:59.114300 mon.a mon.0 158.69.66.87:6789/0 225 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 124 sec, mon.c has slow ops (SLOW_OPS)" in cluster log
fail | 3308742 | 2018-12-05 05:21:49 | 2018-12-15 16:29:05 | 2018-12-15 17:37:05 | 1:08:00 | 0:08:08 | 0:59:52 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh044 with status 1: '\n sudo yum -y install bison\n '
fail | 3308743 | 2018-12-05 05:21:50 | 2018-12-15 16:29:09 | 2018-12-16 03:37:19 | 11:08:10 | 0:11:17 | 10:56:53 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh049 with status 1: '\n sudo yum -y install bison\n '
fail | 3308744 | 2018-12-05 05:21:51 | 2018-12-15 16:33:28 | 2018-12-15 16:57:28 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh084.front.sepia.ceph.com
fail | 3308745 | 2018-12-05 05:21:51 | 2018-12-15 16:37:13 | 2018-12-15 17:39:13 | 1:02:00 | 0:46:30 | 0:15:30 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-15 17:12:59.520232 mon.b mon.0 158.69.68.81:6789/0 141 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3308746 | 2018-12-05 05:21:52 | 2018-12-15 16:41:21 | 2018-12-15 17:57:21 | 1:16:00 | 0:21:50 | 0:54:10 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3308747 | 2018-12-05 05:21:53 | 2018-12-15 16:43:22 | 2018-12-15 17:33:21 | 0:49:59 | 0:27:23 | 0:22:36 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3308748 | 2018-12-05 05:21:54 | 2018-12-15 16:46:59 | 2018-12-15 17:40:59 | 0:54:00 | 0:07:56 | 0:46:04 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh009 with status 1: '\n sudo yum -y install bison\n '
pass | 3308749 | 2018-12-05 05:21:54 | 2018-12-15 16:47:15 | 2018-12-15 18:47:16 | 2:00:01 | 0:34:03 | 1:25:58 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
pass | 3308750 | 2018-12-05 05:21:55 | 2018-12-15 16:53:13 | 2018-12-15 18:07:13 | 1:14:00 | 0:35:32 | 0:38:28 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3308751 | 2018-12-05 05:21:56 | 2018-12-15 16:56:54 | 2018-12-15 18:00:55 | 1:04:01 | 0:08:25 | 0:55:36 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh011 with status 1: '\n sudo yum -y install bison\n '
fail | 3308752 | 2018-12-05 05:21:56 | 2018-12-15 16:57:29 | 2018-12-15 22:17:33 | 5:20:04 | 0:11:36 | 5:08:28 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh061 with status 1: '\n sudo yum -y install bison\n '
fail | 3308753 | 2018-12-05 05:21:57 | 2018-12-15 16:57:33 | 2018-12-15 18:09:33 | 1:12:00 | 0:49:43 | 0:22:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-15 17:48:59.092317 mon.b mon.0 158.69.71.125:6789/0 155 : cluster [WRN] Health check failed: Degraded data redundancy: 284/10286 objects degraded (2.761%), 2 pgs degraded (PG_DEGRADED)" in cluster log
fail | 3308754 | 2018-12-05 05:21:58 | 2018-12-15 17:09:08 | 2018-12-15 17:27:08 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh004.front.sepia.ceph.com
fail | 3308755 | 2018-12-05 05:21:59 | 2018-12-15 17:11:20 | 2018-12-15 19:01:20 | 1:50:00 | 0:10:33 | 1:39:27 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh038 with status 1: '\n sudo yum -y install bison\n '
pass | 3308756 | 2018-12-05 05:22:00 | 2018-12-15 17:13:06 | 2018-12-15 17:59:06 | 0:46:00 | 0:15:22 | 0:30:38 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3308757 | 2018-12-05 05:22:00 | 2018-12-15 17:14:37 | 2018-12-15 17:34:37 | 0:20:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh040.front.sepia.ceph.com
pass | 3308758 | 2018-12-05 05:22:01 | 2018-12-15 17:16:05 | 2018-12-15 19:08:06 | 1:52:01 | 1:24:36 | 0:27:25 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
dead | 3308759 | 2018-12-05 05:22:02 | 2018-12-15 17:27:15 | 2018-12-16 05:29:25 | 12:02:10 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
pass | 3308760 | 2018-12-05 05:22:03 | 2018-12-15 17:27:15 | 2018-12-15 18:41:15 | 1:14:00 | 0:42:12 | 0:31:48 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308761 | 2018-12-05 05:22:03 | 2018-12-15 17:28:58 | 2018-12-15 17:42:57 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh045.front.sepia.ceph.com
fail | 3308762 | 2018-12-05 05:22:04 | 2018-12-15 17:33:34 | 2018-12-15 18:43:34 | 1:10:00 | 0:26:47 | 0:43:13 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair), test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
fail | 3308763 | 2018-12-05 05:22:05 | 2018-12-15 17:33:34 | 2018-12-15 17:51:33 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh016.front.sepia.ceph.com
fail | 3308764 | 2018-12-05 05:22:05 | 2018-12-15 17:34:49 | 2018-12-15 17:54:48 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh083.front.sepia.ceph.com
dead | 3308765 | 2018-12-05 05:22:06 | 2018-12-15 17:36:52 | 2018-12-16 05:39:04 | 12:02:12 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | — | |||
fail | 3308766 | 2018-12-05 05:22:07 | 2018-12-15 17:37:06 | 2018-12-15 18:35:06 | 0:58:00 | 0:08:18 | 0:49:42 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh029 with status 1: '\n sudo yum -y install bison\n '
fail | 3308767 | 2018-12-05 05:22:08 | 2018-12-15 17:39:26 | 2018-12-15 18:21:25 | 0:41:59 | 0:34:07 | 0:07:52 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=802ee2380274f15db187ddd1219533c6b233da6a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
pass | 3308768 | 2018-12-05 05:22:08 | 2018-12-15 17:41:11 | 2018-12-15 20:07:13 | 2:26:02 | 0:37:55 | 1:48:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
pass | 3308769 | 2018-12-05 05:22:09 | 2018-12-15 17:43:09 | 2018-12-15 18:45:09 | 1:02:00 | 0:46:11 | 0:15:49 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3308770 | 2018-12-05 05:22:10 | 2018-12-15 17:44:32 | 2018-12-15 18:52:32 | 1:08:00 | 0:08:51 | 0:59:09 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh089 with status 1: '\n sudo yum -y install bison\n '
fail | 3308771 | 2018-12-05 05:22:10 | 2018-12-15 17:45:05 | 2018-12-15 19:43:06 | 1:58:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh011.front.sepia.ceph.com
fail | 3308772 | 2018-12-05 05:22:11 | 2018-12-15 17:51:35 | 2018-12-15 18:09:34 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
pass | 3308773 | 2018-12-05 05:22:12 | 2018-12-15 17:53:20 | 2018-12-15 19:01:20 | 1:08:00 | 0:46:17 | 0:21:43 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3308774 | 2018-12-05 05:22:13 | 2018-12-15 17:55:00 | 2018-12-15 20:35:02 | 2:40:02 | 0:10:40 | 2:29:22 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh055 with status 1: '\n sudo yum -y install bison\n '
pass | 3308775 | 2018-12-05 05:22:13 | 2018-12-15 17:57:17 | 2018-12-15 18:45:17 | 0:48:00 | 0:33:03 | 0:14:57 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3308776 | 2018-12-05 05:22:14 | 2018-12-15 17:57:23 | 2018-12-15 18:31:22 | 0:33:59 | 0:17:10 | 0:16:49 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3308777 | 2018-12-05 05:22:15 | 2018-12-15 17:59:18 | 2018-12-16 00:49:23 | 6:50:05 | 0:42:13 | 6:07:52 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
pass | 3308778 | 2018-12-05 05:22:16 | 2018-12-15 18:01:07 | 2018-12-15 19:07:07 | 1:06:00 | 0:33:11 | 0:32:49 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3308779 | 2018-12-05 05:22:16 | 2018-12-15 18:07:15 | 2018-12-15 18:49:14 | 0:41:59 | 0:18:05 | 0:23:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3308780 | 2018-12-05 05:22:17 | 2018-12-15 18:09:35 | 2018-12-15 19:59:36 | 1:50:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh056.front.sepia.ceph.com
fail | 3308781 | 2018-12-05 05:22:18 | 2018-12-15 18:09:36 | 2018-12-15 19:15:36 | 1:06:00 | 0:08:21 | 0:57:39 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh072 with status 1: '\n sudo yum -y install bison\n '
fail | 3308782 | 2018-12-05 05:22:19 | 2018-12-15 18:11:07 | 2018-12-15 19:15:07 | 1:04:00 | 0:08:34 | 0:55:26 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh091 with status 1: '\n sudo yum -y install bison\n '
pass | 3308783 | 2018-12-05 05:22:20 | 2018-12-15 18:20:48 | 2018-12-15 19:48:49 | 1:28:01 | 0:46:44 | 0:41:17 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
fail | 3308784 | 2018-12-05 05:22:20 | 2018-12-15 18:21:17 | 2018-12-15 20:35:18 | 2:14:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh031.front.sepia.ceph.com
fail | 3308785 | 2018-12-05 05:22:21 | 2018-12-15 18:21:27 | 2018-12-15 18:35:26 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
fail | 3308786 | 2018-12-05 05:22:22 | 2018-12-15 18:23:18 | 2018-12-15 18:43:17 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh026.front.sepia.ceph.com
fail | 3308787 | 2018-12-05 05:22:23 | 2018-12-15 18:31:34 | 2018-12-15 20:53:36 | 2:22:02 | 0:11:59 | 2:10:03 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh065 with status 1: '\n sudo yum -y install bison\n '
fail | 3308788 | 2018-12-05 05:22:23 | 2018-12-15 18:35:18 | 2018-12-15 18:53:18 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh095.front.sepia.ceph.com |
||||||||||||||
fail | 3308789 | 2018-12-05 05:22:24 | 2018-12-15 18:35:27 | 2018-12-15 18:51:26 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh074.front.sepia.ceph.com |
||||||||||||||
fail | 3308790 | 2018-12-05 05:22:25 | 2018-12-15 18:41:28 | 2018-12-15 22:27:30 | 3:46:02 | 0:10:36 | 3:35:26 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh069 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308791 | 2018-12-05 05:22:26 | 2018-12-15 18:43:30 | 2018-12-15 20:35:31 | 1:52:01 | 1:42:47 | 0:09:14 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2018-12-15 19:07:21.616145 mon.a mon.0 158.69.68.90:6789/0 220 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 124 sec, mon.c has slow ops (SLOW_OPS)" in cluster log |
||||||||||||||
fail | 3308792 | 2018-12-05 05:22:26 | 2018-12-15 18:43:35 | 2018-12-15 19:29:35 | 0:46:00 | 0:08:04 | 0:37:56 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh012 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3308793 | 2018-12-05 05:22:27 | 2018-12-15 18:45:17 | 2018-12-16 06:52:36 | 12:07:19 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
fail | 3308794 | 2018-12-05 05:22:28 | 2018-12-15 18:45:17 | 2018-12-15 19:05:16 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh051.front.sepia.ceph.com |
||||||||||||||
fail | 3308795 | 2018-12-05 05:22:29 | 2018-12-15 18:45:19 | 2018-12-15 19:47:18 | 1:01:59 | 0:09:44 | 0:52:15 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh088 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3308796 | 2018-12-05 05:22:30 | 2018-12-15 18:47:21 | 2018-12-16 06:49:32 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |||
pass | 3308797 | 2018-12-05 05:22:30 | 2018-12-15 18:49:23 | 2018-12-15 19:33:22 | 0:43:59 | 0:29:26 | 0:14:33 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3308798 | 2018-12-05 05:22:31 | 2018-12-15 18:51:28 | 2018-12-15 19:57:28 | 1:06:00 | 0:51:34 | 0:14:26 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3308799 | 2018-12-05 05:22:32 | 2018-12-15 18:52:44 | 2018-12-16 00:10:49 | 5:18:05 | 0:33:56 | 4:44:09 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
dead | 3308800 | 2018-12-05 05:22:33 | 2018-12-15 18:53:19 | 2018-12-16 07:00:39 | 12:07:20 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
fail | 3308801 | 2018-12-05 05:22:34 | 2018-12-15 18:55:23 | 2018-12-15 19:13:22 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh026.front.sepia.ceph.com |
||||||||||||||
fail | 3308802 | 2018-12-05 05:22:34 | 2018-12-15 19:01:32 | 2018-12-16 02:13:38 | 7:12:06 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh026.front.sepia.ceph.com |
||||||||||||||
fail | 3308803 | 2018-12-05 05:22:35 | 2018-12-15 19:01:32 | 2018-12-15 20:01:32 | 1:00:00 | 0:08:07 | 0:51:53 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh093 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308804 | 2018-12-05 05:22:36 | 2018-12-15 19:05:13 | 2018-12-15 19:45:13 | 0:40:00 | 0:16:58 | 0:23:02 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3308805 | 2018-12-05 05:22:36 | 2018-12-15 19:05:17 | 2018-12-15 21:15:18 | 2:10:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh014.front.sepia.ceph.com |
||||||||||||||
fail | 3308806 | 2018-12-05 05:22:37 | 2018-12-15 19:07:19 | 2018-12-15 20:17:19 | 1:10:00 | 0:09:54 | 1:00:06 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308807 | 2018-12-05 05:22:38 | 2018-12-15 19:08:08 | 2018-12-15 19:58:07 | 0:49:59 | 0:16:32 | 0:33:27 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3308808 | 2018-12-05 05:22:39 | 2018-12-15 19:13:33 | 2018-12-15 21:03:34 | 1:50:01 | 1:16:04 | 0:33:57 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
pass | 3308809 | 2018-12-05 05:22:39 | 2018-12-15 19:15:20 | 2018-12-15 22:45:22 | 3:30:02 | 0:24:21 | 3:05:41 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
fail | 3308810 | 2018-12-05 05:22:40 | 2018-12-15 19:15:37 | 2018-12-15 20:15:37 | 1:00:00 | 0:07:52 | 0:52:08 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh003 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308811 | 2018-12-05 05:22:41 | 2018-12-15 19:27:19 | 2018-12-15 20:55:19 | 1:28:00 | 1:01:59 | 0:26:01 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3308812 | 2018-12-05 05:22:42 | 2018-12-15 19:29:24 | 2018-12-16 07:31:35 | 12:02:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | — | |||
pass | 3308813 | 2018-12-05 05:22:42 | 2018-12-15 19:29:36 | 2018-12-15 20:41:36 | 1:12:00 | 0:50:50 | 0:21:10 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3308814 | 2018-12-05 05:22:43 | 2018-12-15 19:33:34 | 2018-12-15 20:29:34 | 0:56:00 | 0:08:39 | 0:47:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason:
Command failed on ovh044 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308815 | 2018-12-05 05:22:44 | 2018-12-15 19:43:14 | 2018-12-15 20:55:14 | 1:12:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh036.front.sepia.ceph.com |
||||||||||||||
fail | 3308816 | 2018-12-05 05:22:45 | 2018-12-15 19:45:25 | 2018-12-15 20:13:25 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh027.front.sepia.ceph.com |
||||||||||||||
fail | 3308817 | 2018-12-05 05:22:45 | 2018-12-15 19:46:08 | 2018-12-15 20:58:09 | 1:12:01 | 0:08:55 | 1:03:06 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh015 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308818 | 2018-12-05 05:22:46 | 2018-12-15 19:47:31 | 2018-12-15 21:55:32 | 2:08:01 | 0:38:16 | 1:29:45 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
pass | 3308819 | 2018-12-05 05:22:47 | 2018-12-15 19:49:01 | 2018-12-15 21:03:01 | 1:14:00 | 0:44:37 | 0:29:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3308820 | 2018-12-05 05:22:48 | 2018-12-15 19:57:35 | 2018-12-15 21:29:35 | 1:32:00 | 1:19:11 | 0:12:49 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-12-15 20:28:53.162494 mon.a mon.0 158.69.73.7:6789/0 161 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
dead | 3308821 | 2018-12-05 05:22:48 | 2018-12-15 19:58:08 | 2018-12-16 08:00:20 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |||
pass | 3308822 | 2018-12-05 05:22:49 | 2018-12-15 19:59:38 | 2018-12-15 20:53:38 | 0:54:00 | 0:26:06 | 0:27:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3308823 | 2018-12-05 05:22:50 | 2018-12-15 20:01:34 | 2018-12-15 20:19:34 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh035.front.sepia.ceph.com |
||||||||||||||
pass | 3308824 | 2018-12-05 05:22:51 | 2018-12-15 20:07:15 | 2018-12-15 23:43:17 | 3:36:02 | 0:25:06 | 3:10:56 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
pass | 3308825 | 2018-12-05 05:22:51 | 2018-12-15 20:11:15 | 2018-12-15 21:05:15 | 0:54:00 | 0:34:57 | 0:19:03 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308826 | 2018-12-05 05:22:52 | 2018-12-15 20:13:53 | 2018-12-15 21:17:53 | 1:04:00 | 0:08:39 | 0:55:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason:
Command failed on ovh003 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308827 | 2018-12-05 05:22:53 | 2018-12-15 20:15:38 | 2018-12-15 22:53:39 | 2:38:01 | 0:46:52 | 1:51:09 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3308828 | 2018-12-05 05:22:54 | 2018-12-15 20:17:31 | 2018-12-15 20:45:30 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh081.front.sepia.ceph.com |
||||||||||||||
pass | 3308829 | 2018-12-05 05:22:54 | 2018-12-15 20:19:35 | 2018-12-15 20:55:34 | 0:35:59 | 0:17:43 | 0:18:16 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3308830 | 2018-12-05 05:22:55 | 2018-12-15 20:29:35 | 2018-12-15 21:25:35 | 0:56:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh100.front.sepia.ceph.com |
||||||||||||||
pass | 3308831 | 2018-12-05 05:22:56 | 2018-12-15 20:31:18 | 2018-12-15 21:09:18 | 0:38:00 | 0:16:18 | 0:21:42 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3308832 | 2018-12-05 05:22:57 | 2018-12-15 20:35:15 | 2018-12-15 21:27:15 | 0:52:00 | 0:08:18 | 0:43:42 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason:
Command failed on ovh073 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308833 | 2018-12-05 05:22:57 | 2018-12-15 20:35:19 | 2018-12-15 21:05:19 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh041.front.sepia.ceph.com |
||||||||||||||
pass | 3308834 | 2018-12-05 05:22:58 | 2018-12-15 20:35:32 | 2018-12-15 22:13:32 | 1:38:00 | 0:21:47 | 1:16:13 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3308835 | 2018-12-05 05:22:59 | 2018-12-15 20:41:37 | 2018-12-15 21:47:37 | 1:06:00 | 0:08:07 | 0:57:53 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh099 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308836 | 2018-12-05 05:23:00 | 2018-12-15 20:45:32 | 2018-12-15 21:13:32 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh091.front.sepia.ceph.com |
||||||||||||||
pass | 3308837 | 2018-12-05 05:23:00 | 2018-12-15 20:53:37 | 2018-12-16 00:51:40 | 3:58:03 | 0:22:37 | 3:35:26 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3308838 | 2018-12-05 05:23:01 | 2018-12-15 20:53:39 | 2018-12-15 21:17:38 | 0:23:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com |
||||||||||||||
fail | 3308839 | 2018-12-05 05:23:02 | 2018-12-15 20:55:15 | 2018-12-15 22:05:15 | 1:10:00 | 0:08:29 | 1:01:31 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason:
Command failed on ovh040 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308840 | 2018-12-05 05:23:03 | 2018-12-15 20:55:20 | 2018-12-16 01:25:24 | 4:30:04 | 0:28:43 | 4:01:21 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3308841 | 2018-12-05 05:23:03 | 2018-12-15 20:55:36 | 2018-12-15 21:49:35 | 0:53:59 | 0:08:34 | 0:45:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
Command failed on ovh097 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308842 | 2018-12-05 05:23:04 | 2018-12-15 20:58:10 | 2018-12-15 21:52:10 | 0:54:00 | 0:10:10 | 0:43:50 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh083 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3308843 | 2018-12-05 05:23:05 | 2018-12-15 20:59:10 | 2018-12-16 09:01:21 | 12:02:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | — | |||
fail | 3308844 | 2018-12-05 05:23:06 | 2018-12-15 21:03:14 | 2018-12-15 22:07:14 | 1:04:00 | 0:08:21 | 0:55:39 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason:
Command failed on ovh060 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308845 | 2018-12-05 05:23:06 | 2018-12-15 21:03:35 | 2018-12-15 21:59:35 | 0:56:00 | 0:08:07 | 0:47:53 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh076 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308846 | 2018-12-05 05:23:07 | 2018-12-15 21:05:27 | 2018-12-15 22:19:28 | 1:14:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh046.front.sepia.ceph.com |
||||||||||||||
fail | 3308847 | 2018-12-05 05:23:08 | 2018-12-15 21:05:27 | 2018-12-15 21:57:27 | 0:52:00 | 0:07:56 | 0:44:04 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308848 | 2018-12-05 05:23:09 | 2018-12-15 21:09:30 | 2018-12-15 21:41:30 | 0:32:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh094.front.sepia.ceph.com |
||||||||||||||
dead | 3308849 | 2018-12-05 05:23:09 | 2018-12-15 21:13:33 | 2018-12-16 09:15:44 | 12:02:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/damage.yaml whitelist_health.yaml} | — | |||
fail | 3308850 | 2018-12-05 05:23:10 | 2018-12-15 21:15:33 | 2018-12-15 21:37:32 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh096.front.sepia.ceph.com |
||||||||||||||
pass | 3308851 | 2018-12-05 05:23:11 | 2018-12-15 21:17:40 | 2018-12-15 22:09:40 | 0:52:00 | 0:17:14 | 0:34:46 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3308852 | 2018-12-05 05:23:12 | 2018-12-15 21:17:54 | 2018-12-15 23:13:55 | 1:56:01 | 0:39:26 | 1:16:35 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
pass | 3308853 | 2018-12-05 05:23:12 | 2018-12-15 21:25:37 | 2018-12-15 22:17:37 | 0:52:00 | 0:31:34 | 0:20:26 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3308854 | 2018-12-05 05:23:13 | 2018-12-15 21:27:28 | 2018-12-15 22:33:28 | 1:06:00 | 0:08:26 | 0:57:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason:
Command failed on ovh057 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3308855 | 2018-12-05 05:23:14 | 2018-12-15 21:29:37 | 2018-12-16 09:31:48 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |||
fail | 3308856 | 2018-12-05 05:23:15 | 2018-12-15 21:32:25 | 2018-12-15 22:48:25 | 1:16:00 | 0:08:14 | 1:07:46 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason:
Command failed on ovh053 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308857 | 2018-12-05 05:23:15 | 2018-12-15 21:37:37 | 2018-12-15 22:41:37 | 1:04:00 | 0:09:44 | 0:54:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason:
Command failed on ovh098 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308858 | 2018-12-05 05:23:16 | 2018-12-15 21:41:43 | 2018-12-15 22:13:43 | 0:32:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh078.front.sepia.ceph.com |
||||||||||||||
fail | 3308859 | 2018-12-05 05:23:17 | 2018-12-15 21:47:39 | 2018-12-15 23:55:40 | 2:08:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com |
||||||||||||||
fail | 3308860 | 2018-12-05 05:23:18 | 2018-12-15 21:49:37 | 2018-12-15 22:51:37 | 1:02:00 | 0:08:28 | 0:53:32 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh054 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308861 | 2018-12-05 05:23:18 | 2018-12-15 21:49:37 | 2018-12-15 22:05:37 | 0:16:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh001.front.sepia.ceph.com |
||||||||||||||
fail | 3308862 | 2018-12-05 05:23:19 | 2018-12-15 21:52:12 | 2018-12-15 23:00:12 | 1:08:00 | 0:27:07 | 0:40:53 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair), test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair) |
||||||||||||||
fail | 3308863 | 2018-12-05 05:23:20 | 2018-12-15 21:55:41 | 2018-12-15 22:53:41 | 0:58:00 | 0:08:06 | 0:49:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308864 | 2018-12-05 05:23:20 | 2018-12-15 21:57:37 | 2018-12-15 22:45:37 | 0:48:00 | 0:26:12 | 0:21:48 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3308865 | 2018-12-05 05:23:21 | 2018-12-15 21:59:37 | 2018-12-15 22:57:37 | 0:58:00 | 0:21:34 | 0:36:26 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3308866 | 2018-12-05 05:23:22 | 2018-12-15 22:05:27 | 2018-12-16 00:01:28 | 1:56:01 | 1:22:36 | 0:33:25 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2018-12-15 23:28:46.141174 mon.b mon.0 158.69.72.173:6789/0 980 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log |
||||||||||||||
fail | 3308867 | 2018-12-05 05:23:23 | 2018-12-15 22:05:38 | 2018-12-15 23:01:38 | 0:56:00 | 0:34:30 | 0:21:30 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on ovh050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=802ee2380274f15db187ddd1219533c6b233da6a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
||||||||||||||
fail | 3308868 | 2018-12-05 05:23:23 | 2018-12-15 22:07:26 | 2018-12-16 03:29:30 | 5:22:04 | 0:11:07 | 5:10:57 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh036 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308869 | 2018-12-05 05:23:24 | 2018-12-15 22:09:48 | 2018-12-15 23:17:49 | 1:08:01 | 0:08:43 | 0:59:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason:
Command failed on ovh080 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3308870 | 2018-12-05 05:23:25 | 2018-12-15 22:13:34 | 2018-12-15 23:11:34 | 0:58:00 | 0:08:16 | 0:49:44 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh093 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3308871 | 2018-12-05 05:23:25 | 2018-12-15 22:13:44 | 2018-12-16 10:15:55 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |||
fail | 3308872 | 2018-12-05 05:23:26 | 2018-12-15 22:17:42 | 2018-12-15 23:21:42 | 1:04:00 | 0:08:07 | 0:55:53 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason:
Command failed on ovh056 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308873 | 2018-12-05 05:23:27 | 2018-12-15 22:17:42 | 2018-12-16 00:53:43 | 2:36:01 | 2:12:03 | 0:23:58 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3308874 | 2018-12-05 05:23:28 | 2018-12-15 22:19:40 | 2018-12-16 00:37:41 | 2:18:01 | 0:11:59 | 2:06:02 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh011 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3308875 | 2018-12-05 05:23:28 | 2018-12-15 22:27:44 | 2018-12-15 23:27:43 | 0:59:59 | 0:32:28 | 0:27:31 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3308876 | 2018-12-05 05:23:29 | 2018-12-15 22:33:42 | 2018-12-15 23:09:42 | 0:36:00 | 0:15:51 | 0:20:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3308877 | 2018-12-05 05:23:30 | 2018-12-15 22:41:50 | 2018-12-15 23:29:50 | 0:48:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh085.front.sepia.ceph.com |
||||||||||||||
fail | 3308878 | 2018-12-05 05:23:30 | 2018-12-15 22:45:34 | 2018-12-15 23:09:34 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh085.front.sepia.ceph.com |
||||||||||||||
pass | 3308879 | 2018-12-05 05:23:31 | 2018-12-15 22:45:38 | 2018-12-15 23:33:38 | 0:48:00 | 0:17:54 | 0:30:06 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
dead | 3308880 | 2018-12-05 05:23:32 | 2018-12-15 22:48:27 | 2018-12-16 10:50:37 | 12:02:10 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | — | |||
pass | 3308881 | 2018-12-05 05:23:33 | 2018-12-15 22:51:39 | 2018-12-15 23:37:39 | 0:46:00 | 0:15:44 | 0:30:16 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 |