User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-12-17 05:20:02 | 2019-01-01 14:26:53 | 2019-01-02 05:59:55 | 15:33:02 | kcephfs | mimic | ovh | a64198e | 95 | 134 | 21 |
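The per-job table below uses the same pipe-delimited layout as this summary. As a minimal, hypothetical sketch (not part of teuthology or pulpito; the helper name and sample rows are made up), the Pass/Fail/Dead tallies above can be recomputed from rows whose first cell is the job status:

```python
from collections import Counter

def tally_statuses(rows):
    """Tally job outcomes from pipe-delimited result rows whose
    first cell is one of "pass", "fail", or "dead".

    Header and separator rows are ignored automatically, since
    their first cell is not a recognized status value.
    """
    counts = Counter()
    for row in rows:
        cells = [cell.strip() for cell in row.strip().strip("|").split("|")]
        if cells and cells[0] in ("pass", "fail", "dead"):
            counts[cells[0]] += 1
    return counts

# Made-up rows in the shape of the table below:
sample_rows = [
    "fail | 3370851 | 2018-12-17 05:20:22 | ... | 3 |",
    "pass | 3370855 | 2018-12-17 05:20:25 | ... | 3 |",
    "dead | 3370859 | 2018-12-17 05:20:27 | ... | — |",
]
```

Running `tally_statuses(sample_rows)` on the full job list would reproduce the 95/134/21 split shown in the summary row.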
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3370851 | 2018-12-17 05:20:22 | 2018-12-31 23:25:18 | 2019-01-01 00:37:18 | 1:12:00 | 0:09:13 | 1:02:47 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh053 with status 1: '\n sudo yum -y install bison\n '
fail | 3370852 | 2018-12-17 05:20:22 | 2018-12-31 23:26:47 | 2019-01-01 02:24:49 | 2:58:02 | | | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Could not reconnect to ubuntu@ovh024.front.sepia.ceph.com
fail | 3370853 | 2018-12-17 05:20:23 | 2018-12-31 23:26:47 | 2019-01-01 08:26:55 | 9:00:08 | 0:11:11 | 8:48:57 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install bison\n '
fail | 3370854 | 2018-12-17 05:20:24 | 2018-12-31 23:28:55 | 2019-01-01 01:00:56 | 1:32:01 | 1:06:48 | 0:25:13 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2019-01-01 00:23:00.524307 mon.a mon.0 158.69.66.92:6789/0 2334 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3370855 | 2018-12-17 05:20:25 | 2018-12-31 23:33:04 | 2019-01-01 01:13:04 | 1:40:00 | 0:58:41 | 0:41:19 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3370856 | 2018-12-17 05:20:25 | 2018-12-31 23:35:06 | 2019-01-01 10:19:16 | 10:44:10 | 0:11:05 | 10:33:05 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh031 with status 1: '\n sudo yum -y install bison\n '
pass | 3370857 | 2018-12-17 05:20:26 | 2018-12-31 23:53:04 | 2019-01-01 01:17:04 | 1:24:00 | 0:59:51 | 0:24:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3370858 | 2018-12-17 05:20:27 | 2018-12-31 23:53:56 | 2019-01-01 01:13:57 | 1:20:01 | 0:09:09 | 1:10:52 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh052 with status 1: '\n sudo yum -y install bison\n '
dead | 3370859 | 2018-12-17 05:20:27 | 2018-12-31 23:58:44 | 2019-01-01 12:00:56 | 12:02:12 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | — | |
fail | 3370860 | 2018-12-17 05:20:28 | 2019-01-01 00:00:37 | 2019-01-01 01:20:38 | 1:20:01 | 0:09:27 | 1:10:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install bison\n '
pass | 3370861 | 2018-12-17 05:20:29 | 2019-01-01 00:00:55 | 2019-01-01 01:10:56 | 1:10:01 | 0:34:40 | 0:35:21 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
dead | 3370862 | 2018-12-17 05:20:30 | 2019-01-01 00:08:48 | 2019-01-01 12:20:49 | 12:12:01 | | | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |
fail | 3370863 | 2018-12-17 05:20:30 | 2019-01-01 00:13:09 | 2019-01-01 01:19:09 | 1:06:00 | 0:10:04 | 0:55:56 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh026 with status 1: '\n sudo yum -y install bison\n '
fail | 3370864 | 2018-12-17 05:20:31 | 2019-01-01 00:14:42 | 2019-01-01 00:38:41 | 0:23:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh048.front.sepia.ceph.com
dead | 3370865 | 2018-12-17 05:20:32 | 2019-01-01 00:22:29 | 2019-01-01 12:25:11 | 12:02:42 | | | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
fail | 3370866 | 2018-12-17 05:20:32 | 2019-01-01 00:23:07 | 2019-01-01 01:43:08 | 1:20:01 | 0:09:08 | 1:10:53 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh072 with status 1: '\n sudo yum -y install bison\n '
pass | 3370867 | 2018-12-17 05:20:33 | 2019-01-01 00:32:51 | 2019-01-01 01:36:51 | 1:04:00 | 0:49:24 | 0:14:36 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
dead | 3370868 | 2018-12-17 05:20:34 | 2019-01-01 00:34:22 | 2019-01-01 12:36:34 | 12:02:12 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | — | |
fail | 3370869 | 2018-12-17 05:20:35 | 2019-01-01 00:37:31 | 2019-01-01 01:05:31 | 0:28:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh100.front.sepia.ceph.com
pass | 3370870 | 2018-12-17 05:20:35 | 2019-01-01 00:38:50 | 2019-01-01 01:20:50 | 0:42:00 | 0:20:28 | 0:21:32 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3370871 | 2018-12-17 05:20:36 | 2019-01-01 00:38:53 | 2019-01-01 11:59:09 | 11:20:16 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Could not reconnect to ubuntu@ovh076.front.sepia.ceph.com
pass | 3370872 | 2018-12-17 05:20:37 | 2019-01-01 00:54:46 | 2019-01-01 02:08:46 | 1:14:00 | 0:44:50 | 0:29:10 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3370873 | 2018-12-17 05:20:37 | 2019-01-01 01:01:09 | 2019-01-01 01:59:09 | 0:58:00 | 0:18:48 | 0:39:12 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3370874 | 2018-12-17 05:20:38 | 2019-01-01 01:05:43 | 2019-01-01 09:23:51 | 8:18:08 | 0:37:14 | 7:40:54 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3370875 | 2018-12-17 05:20:39 | 2019-01-01 01:10:55 | 2019-01-01 02:00:55 | 0:50:00 | 0:15:52 | 0:34:08 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3370876 | 2018-12-17 05:20:39 | 2019-01-01 01:10:57 | 2019-01-01 01:26:56 | 0:15:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
fail | 3370877 | 2018-12-17 05:20:40 | 2019-01-01 01:13:01 | 2019-01-01 01:49:01 | 0:36:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
fail | 3370878 | 2018-12-17 05:20:41 | 2019-01-01 01:13:05 | 2019-01-01 12:01:15 | 10:48:10 | 0:10:37 | 10:37:33 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh056 with status 1: '\n sudo yum -y install bison\n '
fail | 3370879 | 2018-12-17 05:20:42 | 2019-01-01 01:13:58 | 2019-01-01 01:39:57 | 0:25:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh063.front.sepia.ceph.com
fail | 3370880 | 2018-12-17 05:20:42 | 2019-01-01 01:17:16 | 2019-01-01 01:37:15 | 0:19:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh032.front.sepia.ceph.com
fail | 3370881 | 2018-12-17 05:20:43 | 2019-01-01 01:18:50 | 2019-01-01 06:58:55 | 5:40:05 | 0:11:24 | 5:28:41 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh029 with status 1: '\n sudo yum -y install bison\n '
pass | 3370882 | 2018-12-17 05:20:44 | 2019-01-01 01:19:10 | 2019-01-01 03:01:11 | 1:42:01 | 1:04:00 | 0:38:01 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3370883 | 2018-12-17 05:20:44 | 2019-01-01 01:20:49 | 2019-01-01 02:20:49 | 1:00:00 | 0:09:48 | 0:50:12 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n '
fail | 3370884 | 2018-12-17 05:20:45 | 2019-01-01 01:20:51 | 2019-01-01 07:42:57 | 6:22:06 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Could not reconnect to ubuntu@ovh075.front.sepia.ceph.com
fail | 3370885 | 2018-12-17 05:20:46 | 2019-01-01 01:22:46 | 2019-01-01 01:52:46 | 0:30:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh083.front.sepia.ceph.com
fail | 3370886 | 2018-12-17 05:20:46 | 2019-01-01 01:27:07 | 2019-01-01 02:47:07 | 1:20:00 | 0:08:30 | 1:11:30 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh026 with status 1: '\n sudo yum -y install bison\n '
dead | 3370887 | 2018-12-17 05:20:47 | 2019-01-01 01:27:21 | 2019-01-01 13:29:33 | 12:02:12 | | | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | — | |
pass | 3370888 | 2018-12-17 05:20:48 | 2019-01-01 01:37:02 | 2019-01-01 02:47:02 | 1:10:00 | 0:46:37 | 0:23:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3370889 | 2018-12-17 05:20:49 | 2019-01-01 01:37:16 | 2019-01-01 02:47:17 | 1:10:01 | 0:08:40 | 1:01:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh060 with status 1: '\n sudo yum -y install bison\n '
dead | 3370890 | 2018-12-17 05:20:49 | 2019-01-01 01:40:09 | 2019-01-01 13:42:21 | 12:02:12 | | | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |
fail | 3370891 | 2018-12-17 05:20:50 | 2019-01-01 01:41:01 | 2019-01-01 02:05:00 | 0:23:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
pass | 3370892 | 2018-12-17 05:20:51 | 2019-01-01 01:43:19 | 2019-01-01 02:55:19 | 1:12:00 | 0:47:33 | 0:24:27 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
dead | 3370893 | 2018-12-17 05:20:51 | 2019-01-01 01:48:26 | 2019-01-01 13:50:38 | 12:02:12 | | | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | — | |
pass | 3370894 | 2018-12-17 05:20:52 | 2019-01-01 01:49:11 | 2019-01-01 02:55:11 | 1:06:00 | 0:35:09 | 0:30:51 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3370895 | 2018-12-17 05:20:53 | 2019-01-01 01:52:58 | 2019-01-01 02:54:58 | 1:02:00 | 0:08:57 | 0:53:03 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh002 with status 1: '\n sudo yum -y install bison\n '
dead | 3370896 | 2018-12-17 05:20:54 | 2019-01-01 01:59:22 | 2019-01-01 14:01:34 | 12:02:12 | | | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | — | |
pass | 3370897 | 2018-12-17 05:20:54 | 2019-01-01 02:00:56 | 2019-01-01 03:22:57 | 1:22:01 | 0:51:53 | 0:30:08 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3370898 | 2018-12-17 05:20:55 | 2019-01-01 02:00:57 | 2019-01-01 02:50:56 | 0:49:59 | 0:18:59 | 0:31:00 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3370899 | 2018-12-17 05:20:56 | 2019-01-01 02:05:12 | 2019-01-01 03:53:13 | 1:48:01 | 0:37:08 | 1:10:53 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3370900 | 2018-12-17 05:20:57 | 2019-01-01 02:08:42 | 2019-01-01 02:50:42 | 0:42:00 | 0:16:43 | 0:25:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3370901 | 2018-12-17 05:20:57 | 2019-01-01 02:08:47 | 2019-01-01 03:08:47 | 1:00:00 | 0:20:47 | 0:39:13 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3370902 | 2018-12-17 05:20:58 | 2019-01-01 02:21:00 | 2019-01-01 03:11:00 | 0:50:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Could not reconnect to ubuntu@ovh048.front.sepia.ceph.com
pass | 3370903 | 2018-12-17 05:20:59 | 2019-01-01 02:25:00 | 2019-01-01 12:03:09 | 9:38:09 | 0:22:02 | 9:16:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3370904 | 2018-12-17 05:20:59 | 2019-01-01 03:34:57 | 2019-01-01 05:02:58 | 1:28:01 | 0:42:48 | 0:45:13 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2019-01-01 04:39:51.601470 mon.a mon.0 158.69.64.122:6789/0 1639 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3370905 | 2018-12-17 05:21:00 | 2019-01-01 03:53:16 | 2019-01-01 04:57:16 | 1:04:00 | 0:09:06 | 0:54:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh041 with status 1: '\n sudo yum -y install bison\n '
pass | 3370906 | 2018-12-17 05:21:01 | 2019-01-01 03:57:25 | 2019-01-01 11:29:33 | 7:32:08 | 0:20:37 | 7:11:31 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3370907 | 2018-12-17 05:21:02 | 2019-01-01 04:07:00 | 2019-01-01 04:28:59 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh058.front.sepia.ceph.com
fail | 3370908 | 2018-12-17 05:21:02 | 2019-01-01 04:11:15 | 2019-01-01 05:31:15 | 1:20:00 | 0:08:57 | 1:11:03 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n '
pass | 3370909 | 2018-12-17 05:21:03 | 2019-01-01 04:29:10 | 2019-01-01 11:37:16 | 7:08:06 | 0:31:40 | 6:36:26 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3370910 | 2018-12-17 05:21:04 | 2019-01-01 04:41:02 | 2019-01-01 06:27:03 | 1:46:01 | 1:30:44 | 0:15:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2019-01-01 05:44:26.168573 mon.b mon.0 158.69.68.6:6789/0 403 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3370911 | 2018-12-17 05:21:05 | 2019-01-01 04:57:19 | 2019-01-01 05:17:18 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh094.front.sepia.ceph.com
dead | 3370912 | 2018-12-17 05:21:05 | 2019-01-01 09:52:15 | 2019-01-01 22:09:08 | 12:16:53 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
fail | 3370913 | 2018-12-17 05:21:06 | 2019-01-01 09:56:50 | 2019-01-01 10:16:49 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh086.front.sepia.ceph.com
fail | 3370914 | 2018-12-17 05:21:07 | 2019-01-01 09:59:39 | 2019-01-01 11:33:45 | 1:34:06 | 1:18:13 | 0:15:53 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2019-01-01 10:39:07.357231 mon.b mon.0 158.69.72.173:6789/0 143 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3370915 | 2018-12-17 05:21:08 | 2019-01-01 09:59:39 | 2019-01-01 14:33:43 | 4:34:04 | 0:24:04 | 4:10:00 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
fail | 3370916 | 2018-12-17 05:21:08 | 2019-01-01 10:01:49 | 2019-01-01 10:29:48 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com
fail | 3370917 | 2018-12-17 05:21:09 | 2019-01-01 10:02:26 | 2019-01-01 11:00:26 | 0:58:00 | 0:08:47 | 0:49:13 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh066 with status 1: '\n sudo yum -y install bison\n '
dead | 3370918 | 2018-12-17 05:21:10 | 2019-01-01 10:03:33 | 2019-01-01 22:05:44 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | — | |||
fail | 3370919 | 2018-12-17 05:21:11 | 2019-01-01 10:04:08 | 2019-01-01 11:12:08 | 1:08:00 | 0:08:48 | 0:59:12 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh100 with status 1: '\n sudo yum -y install bison\n '
pass | 3370920 | 2018-12-17 05:21:12 | 2019-01-01 10:06:08 | 2019-01-01 10:38:08 | 0:32:00 | 0:18:19 | 0:13:41 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
dead | 3370921 | 2018-12-17 05:21:12 | 2019-01-01 10:06:53 | 2019-01-01 22:09:05 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | — | |||
fail | 3370922 | 2018-12-17 05:21:13 | 2019-01-01 10:08:03 | 2019-01-01 11:08:03 | 1:00:00 | 0:10:15 | 0:49:45 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh074 with status 1: '\n sudo yum -y install bison\n '
fail | 3370923 | 2018-12-17 05:21:14 | 2019-01-01 10:10:08 | 2019-01-01 10:28:07 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
pass | 3370924 | 2018-12-17 05:21:14 | 2019-01-01 10:11:53 | 2019-01-01 12:33:55 | 2:22:02 | 0:41:12 | 1:40:50 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
fail | 3370925 | 2018-12-17 05:21:15 | 2019-01-01 10:11:53 | 2019-01-01 10:25:53 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh011.front.sepia.ceph.com
pass | 3370926 | 2018-12-17 05:21:16 | 2019-01-01 10:17:01 | 2019-01-01 11:13:01 | 0:56:00 | 0:20:47 | 0:35:13 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3370927 | 2018-12-17 05:21:17 | 2019-01-01 10:19:28 | 2019-01-01 11:37:28 | 1:18:00 | 0:09:25 | 1:08:35 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason: Command failed on ovh097 with status 1: '\n sudo yum -y install bison\n '
fail | 3370928 | 2018-12-17 05:21:18 | 2019-01-01 10:23:58 | 2019-01-01 14:20:01 | 3:56:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh064.front.sepia.ceph.com
pass | 3370929 | 2018-12-17 05:21:18 | 2019-01-01 10:23:58 | 2019-01-01 11:05:58 | 0:42:00 | 0:34:26 | 0:07:34 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3370930 | 2018-12-17 05:21:19 | 2019-01-01 10:26:04 | 2019-01-01 12:14:05 | 1:48:01 | 1:17:08 | 0:30:53 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3370931 | 2018-12-17 05:21:20 | 2019-01-01 10:27:15 | 2019-01-01 19:15:22 | 8:48:07 | 0:32:12 | 8:15:55 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair), test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
fail | 3370932 | 2018-12-17 05:21:20 | 2019-01-01 10:27:53 | 2019-01-01 11:23:53 | 0:56:00 | 0:08:49 | 0:47:11 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh024 with status 1: '\n sudo yum -y install bison\n '
pass | 3370933 | 2018-12-17 05:21:21 | 2019-01-01 10:28:08 | 2019-01-01 11:22:08 | 0:54:00 | 0:26:49 | 0:27:11 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3370934 | 2018-12-17 05:21:22 | 2019-01-01 10:28:27 | 2019-01-01 12:54:29 | 2:26:02 | 0:21:14 | 2:04:48 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
pass | 3370935 | 2018-12-17 05:21:23 | 2019-01-01 10:30:01 | 2019-01-01 12:20:02 | 1:50:01 | 1:32:26 | 0:17:35 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3370936 | 2018-12-17 05:21:23 | 2019-01-01 10:33:56 | 2019-01-01 11:01:56 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh041.front.sepia.ceph.com
fail | 3370937 | 2018-12-17 05:21:24 | 2019-01-01 10:36:03 | 2019-01-01 14:02:05 | 3:26:02 | 0:10:40 | 3:15:22 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh009 with status 1: '\n sudo yum -y install bison\n '
pass | 3370938 | 2018-12-17 05:21:25 | 2019-01-01 10:38:20 | 2019-01-01 11:46:20 | 1:08:00 | 0:45:06 | 0:22:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3370939 | 2018-12-17 05:21:26 | 2019-01-01 10:42:04 | 2019-01-01 12:04:05 | 1:22:01 | 1:10:33 | 0:11:28 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2019-01-01 11:13:04.598635 mon.b mon.0 158.69.70.14:6789/0 148 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 3370940 | 2018-12-17 05:21:26 | 2019-01-01 10:43:20 | 2019-01-01 22:50:40 | 12:07:20 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |||
fail | 3370941 | 2018-12-17 05:21:27 | 2019-01-01 10:44:02 | 2019-01-01 11:50:02 | 1:06:00 | 0:09:05 | 0:56:55 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh038 with status 1: '\n sudo yum -y install bison\n '
pass | 3370942 | 2018-12-17 05:21:28 | 2019-01-01 10:47:57 | 2019-01-01 12:15:57 | 1:28:00 | 1:14:23 | 0:13:37 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3370943 | 2018-12-17 05:21:29 | 2019-01-01 10:52:14 | 2019-01-01 12:12:14 | 1:20:00 | 0:30:32 | 0:49:28 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3370944 | 2018-12-17 05:21:29 | 2019-01-01 10:56:13 | 2019-01-01 12:02:13 | 1:06:00 | 0:08:28 | 0:57:32 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh082 with status 1: '\n sudo yum -y install bison\n '
pass | 3370945 | 2018-12-17 05:21:30 | 2019-01-01 11:00:38 | 2019-01-01 11:36:38 | 0:36:00 | 0:18:19 | 0:17:41 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3370946 | 2018-12-17 05:21:31 | 2019-01-01 11:02:08 | 2019-01-01 15:46:12 | 4:44:04 | 0:51:39 | 3:52:25 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3370947 | 2018-12-17 05:21:31 | 2019-01-01 11:06:00 | 2019-01-01 12:12:00 | 1:06:00 | 0:09:13 | 0:56:47 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh094 with status 1: '\n sudo yum -y install bison\n '
fail | 3370948 | 2018-12-17 05:21:32 | 2019-01-01 11:06:00 | 2019-01-01 12:14:00 | 1:08:00 | 0:09:05 | 0:58:55 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh010 with status 1: '\n sudo yum -y install bison\n '
fail | 3370949 | 2018-12-17 05:21:33 | 2019-01-01 11:08:15 | 2019-01-01 12:22:16 | 1:14:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh068.front.sepia.ceph.com
fail | 3370950 | 2018-12-17 05:21:34 | 2019-01-01 11:12:10 | 2019-01-01 12:26:10 | 1:14:00 | 0:08:43 | 1:05:17 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh096 with status 1: '\n sudo yum -y install bison\n '
fail | 3370951 | 2018-12-17 05:21:34 | 2019-01-01 11:13:12 | 2019-01-01 11:43:12 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh036.front.sepia.ceph.com
fail | 3370952 | 2018-12-17 05:21:35 | 2019-01-01 11:22:11 | 2019-01-01 11:54:10 | 0:31:59 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh023.front.sepia.ceph.com
fail | 3370953 | 2018-12-17 05:21:36 | 2019-01-01 11:23:55 | 2019-01-01 20:52:03 | 9:28:08 | 0:11:14 | 9:16:54 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh066 with status 1: '\n sudo yum -y install bison\n '
fail | 3370954 | 2018-12-17 05:21:36 | 2019-01-01 11:29:45 | 2019-01-01 12:51:45 | 1:22:00 | 1:06:54 | 0:15:06 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2019-01-01 12:06:12.493429 mon.b mon.0 158.69.73.137:6789/0 1762 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3370955 | 2018-12-17 05:21:37 | 2019-01-01 11:33:47 | 2019-01-01 12:33:47 | 1:00:00 | 0:09:24 | 0:50:36 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh031 with status 1: '\n sudo yum -y install bison\n '
fail | 3370956 | 2018-12-17 05:21:38 | 2019-01-01 11:36:50 | 2019-01-01 14:18:52 | 2:42:02 | 0:11:24 | 2:30:38 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh015 with status 1: '\n sudo yum -y install bison\n '
fail | 3370957 | 2018-12-17 05:21:38 | 2019-01-01 11:37:18 | 2019-01-01 12:05:17 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh028.front.sepia.ceph.com |
||||||||||||||
pass | 3370958 | 2018-12-17 05:21:39 | 2019-01-01 11:37:29 | 2019-01-01 12:43:30 | 1:06:01 | 0:27:33 | 0:38:28 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 3370959 | 2018-12-17 05:21:40 | 2019-01-01 11:38:14 | 2019-01-01 16:20:17 | 4:42:03 | 0:11:40 | 4:30:23 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370960 | 2018-12-17 05:21:41 | 2019-01-01 11:43:14 | 2019-01-01 13:43:15 | 2:00:01 | 1:27:46 | 0:32:15 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2019-01-01 12:21:50.321190 mon.b mon.0 158.69.69.157:6789/0 266 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 121 sec, mon.c has slow ops (SLOW_OPS)" in cluster log |
||||||||||||||
fail | 3370961 | 2018-12-17 05:21:41 | 2019-01-01 11:44:13 | 2019-01-01 12:38:12 | 0:53:59 | 0:08:46 | 0:45:13 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh088 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
dead | 3370962 | 2018-12-17 05:21:42 | 2019-01-01 11:46:22 | 2019-01-02 00:03:21 | 12:16:59 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
fail | 3370963 | 2018-12-17 05:21:43 | 2019-01-01 11:50:18 | 2019-01-01 12:52:18 | 1:02:00 | 0:07:39 | 0:54:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason:
Command failed on ovh058 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370964 | 2018-12-17 05:21:44 | 2019-01-01 11:54:12 | 2019-01-01 12:48:12 | 0:54:00 | 0:38:28 | 0:15:32 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2019-01-01 12:33:09.623705 mon.a mon.0 158.69.67.40:6789/0 148 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
pass | 3370965 | 2018-12-17 05:21:44 | 2019-01-01 11:59:11 | 2019-01-01 14:05:12 | 2:06:01 | 0:22:37 | 1:43:24 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3370966 | 2018-12-17 05:21:45 | 2019-01-01 12:00:58 | 2019-01-01 12:38:57 | 0:37:59 | 0:24:41 | 0:13:18 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3370967 | 2018-12-17 05:21:46 | 2019-01-01 12:01:17 | 2019-01-01 13:01:17 | 1:00:00 | 0:09:00 | 0:51:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason:
Command failed on ovh056 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370968 | 2018-12-17 05:21:47 | 2019-01-01 12:02:26 | 2019-01-01 12:42:25 | 0:39:59 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh007.front.sepia.ceph.com |
||||||||||||||
pass | 3370969 | 2018-12-17 05:21:47 | 2019-01-01 12:03:10 | 2019-01-01 13:01:10 | 0:58:00 | 0:39:10 | 0:18:50 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3370970 | 2018-12-17 05:21:48 | 2019-01-01 12:04:07 | 2019-01-01 12:46:06 | 0:41:59 | 0:16:59 | 0:25:00 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3370971 | 2018-12-17 05:21:49 | 2019-01-01 12:05:19 | 2019-01-01 13:37:20 | 1:32:01 | 0:41:34 | 0:50:27 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3370972 | 2018-12-17 05:21:49 | 2019-01-01 12:12:03 | 2019-01-01 13:14:02 | 1:01:59 | 0:10:00 | 0:51:59 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh081 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370973 | 2018-12-17 05:21:50 | 2019-01-01 12:12:15 | 2019-01-01 13:18:15 | 1:06:00 | 0:10:02 | 0:55:58 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason:
Command failed on ovh010 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370974 | 2018-12-17 05:21:51 | 2019-01-01 12:14:02 | 2019-01-01 14:16:03 | 2:02:01 | 0:12:13 | 1:49:48 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh039 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3370975 | 2018-12-17 05:21:52 | 2019-01-01 12:14:06 | 2019-01-01 12:52:06 | 0:38:00 | 0:17:56 | 0:20:04 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3370976 | 2018-12-17 05:21:52 | 2019-01-01 12:16:00 | 2019-01-01 12:49:59 | 0:33:59 | 0:20:17 | 0:13:42 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3370977 | 2018-12-17 05:21:53 | 2019-01-01 12:20:04 | 2019-01-01 12:48:03 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh095.front.sepia.ceph.com |
||||||||||||||
fail | 3370978 | 2018-12-17 05:21:54 | 2019-01-01 12:21:00 | 2019-01-01 16:39:03 | 4:18:03 | 0:10:42 | 4:07:21 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh041 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3370979 | 2018-12-17 05:21:54 | 2019-01-01 12:22:29 | 2019-01-01 13:12:29 | 0:50:00 | 0:40:32 | 0:09:28 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3370980 | 2018-12-17 05:21:55 | 2019-01-01 12:25:24 | 2019-01-01 13:31:24 | 1:06:00 | 0:52:16 | 0:13:44 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3370981 | 2018-12-17 05:21:56 | 2019-01-01 12:26:12 | 2019-01-01 13:40:12 | 1:14:00 | 0:28:30 | 0:45:30 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair) |
||||||||||||||
fail | 3370982 | 2018-12-17 05:21:57 | 2019-01-01 12:33:49 | 2019-01-01 12:51:48 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com |
||||||||||||||
fail | 3370983 | 2018-12-17 05:21:57 | 2019-01-01 12:33:56 | 2019-01-01 13:31:56 | 0:58:00 | 0:08:39 | 0:49:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason:
Command failed on ovh042 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3370984 | 2018-12-17 05:21:58 | 2019-01-01 12:36:47 | 2019-01-01 14:38:48 | 2:02:01 | 0:23:01 | 1:39:00 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
pass | 3370985 | 2018-12-17 05:21:59 | 2019-01-01 12:38:14 | 2019-01-01 14:44:16 | 2:06:02 | 1:34:39 | 0:31:23 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3370986 | 2018-12-17 05:21:59 | 2019-01-01 12:38:59 | 2019-01-01 13:30:58 | 0:51:59 | 0:33:57 | 0:18:02 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on ovh038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a64198e24492cde465cb3235813a30945003456a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
||||||||||||||
fail | 3370987 | 2018-12-17 05:22:00 | 2019-01-01 12:42:37 | 2019-01-01 19:52:44 | 7:10:07 | 0:11:35 | 6:58:32 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh050 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3370988 | 2018-12-17 05:22:01 | 2019-01-01 12:43:31 | 2019-01-01 13:09:30 | 0:25:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh007.front.sepia.ceph.com |
||||||||||||||
fail | 3370989 | 2018-12-17 05:22:02 | 2019-01-01 12:46:08 | 2019-01-01 13:00:08 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com |
||||||||||||||
pass | 3370990 | 2018-12-17 05:22:02 | 2019-01-01 12:48:05 | 2019-01-01 15:14:07 | 2:26:02 | 0:24:19 | 2:01:43 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3370991 | 2018-12-17 05:22:03 | 2019-01-01 12:48:14 | 2019-01-01 13:22:13 | 0:33:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh057.front.sepia.ceph.com |
||||||||||||||
pass | 3370992 | 2018-12-17 05:22:04 | 2019-01-01 12:50:01 | 2019-01-01 13:50:01 | 1:00:00 | 0:46:44 | 0:13:16 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3370993 | 2018-12-17 05:22:04 | 2019-01-01 12:51:48 | 2019-01-01 19:05:52 | 6:14:04 | 0:27:46 | 5:46:18 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3370994 | 2018-12-17 05:22:05 | 2019-01-01 12:51:49 | 2019-01-01 13:19:49 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com |
||||||||||||||
pass | 3370995 | 2018-12-17 05:22:06 | 2019-01-01 12:52:10 | 2019-01-01 13:32:09 | 0:39:59 | 0:18:35 | 0:21:24 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3370996 | 2018-12-17 05:22:07 | 2019-01-01 12:52:19 | 2019-01-01 15:34:21 | 2:42:02 | 0:45:31 | 1:56:31 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3370997 | 2018-12-17 05:22:07 | 2019-01-01 12:54:32 | 2019-01-01 13:18:31 | 0:23:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh003.front.sepia.ceph.com |
||||||||||||||
pass | 3370998 | 2018-12-17 05:22:08 | 2019-01-01 13:00:17 | 2019-01-01 13:50:17 | 0:50:00 | 0:19:44 | 0:30:16 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3370999 | 2018-12-17 05:22:09 | 2019-01-01 13:01:11 | 2019-01-01 15:07:12 | 2:06:01 | 0:10:43 | 1:55:18 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh080 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
pass | 3371000 | 2018-12-17 05:22:09 | 2019-01-01 13:01:30 | 2019-01-01 13:33:30 | 0:32:00 | 0:18:02 | 0:13:58 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3371001 | 2018-12-17 05:22:10 | 2019-01-01 13:09:44 | 2019-01-01 13:45:43 | 0:35:59 | 0:18:01 | 0:17:58 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3371002 | 2018-12-17 05:22:11 | 2019-01-01 13:12:42 | 2019-01-01 13:38:42 | 0:26:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh078.front.sepia.ceph.com |
||||||||||||||
fail | 3371003 | 2018-12-17 05:22:12 | 2019-01-01 13:14:04 | 2019-01-01 17:00:07 | 3:46:03 | 0:10:37 | 3:35:26 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh028 with status 1: '\n sudo yum -y install bison\n ' |
||||||||||||||
fail | 3371004 | 2018-12-17 05:22:12 | 2019-01-01 13:18:18 | 2019-01-01 14:20:18 | 1:02:00 | 0:45:13 | 0:16:47 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2019-01-01 13:52:21.973880 mon.b mon.0 158.69.69.88:6789/0 1790 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
fail | 3371005 | 2018-12-17 05:22:13 | 2019-01-01 13:18:33 | 2019-01-01 13:48:32 | 0:29:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh100.front.sepia.ceph.com |
||||||||||||||
pass | 3371006 | 2018-12-17 05:22:14 | 2019-01-01 13:19:51 | 2019-01-01 17:25:54 | 4:06:03 | 0:20:43 | 3:45:20 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3371007 | 2018-12-17 05:22:14 | 2019-01-01 13:22:15 | 2019-01-01 14:56:16 | 1:34:01 | 1:01:26 | 0:32:35 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
pass | 3371008 | 2018-12-17 05:22:15 | 2019-01-01 13:29:47 | 2019-01-01 14:15:46 | 0:45:59 | 0:26:22 | 0:19:37 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3371009 | 2018-12-17 05:22:16 | 2019-01-01 13:31:11 | 2019-01-01 19:43:16 | 6:12:05 | 0:27:57 | 5:44:08 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3371010 | 2018-12-17 05:22:17 | 2019-01-01 13:31:25 | 2019-01-01 15:09:26 | 1:38:01 | 1:20:21 | 0:17:40 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2019-01-01 14:45:17.821528 mon.b mon.0 158.69.65.96:6789/0 887 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3371011 | 2018-12-17 05:22:17 | 2019-01-01 13:31:57 | 2019-01-01 13:51:56 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh045.front.sepia.ceph.com
fail | 3371012 | 2018-12-17 05:22:18 | 2019-01-01 13:32:24 | 2019-01-02 00:40:35 | 11:08:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh091.front.sepia.ceph.com
pass | 3371013 | 2018-12-17 05:22:19 | 2019-01-01 13:33:44 | 2019-01-01 14:35:44 | 1:02:00 | 0:44:08 | 0:17:52 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
pass | 3371014 | 2018-12-17 05:22:19 | 2019-01-01 13:37:34 | 2019-01-01 14:39:34 | 1:02:00 | 0:41:01 | 0:20:59 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
dead | 3371015 | 2018-12-17 05:22:20 | 2019-01-01 13:38:55 | 2019-01-02 01:41:07 | 12:02:12 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |||
fail | 3371016 | 2018-12-17 05:22:21 | 2019-01-01 13:40:27 | 2019-01-01 14:04:27 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh060.front.sepia.ceph.com
fail | 3371017 | 2018-12-17 05:22:22 | 2019-01-01 13:42:35 | 2019-01-01 14:50:35 | 1:08:00 | 0:08:50 | 0:59:10 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh004 with status 1: '\n sudo yum -y install bison\n '
fail | 3371018 | 2018-12-17 05:22:22 | 2019-01-01 13:43:16 | 2019-01-01 16:47:18 | 3:04:02 | 0:10:31 | 2:53:31 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh002 with status 1: '\n sudo yum -y install bison\n '
dead | 3371019 | 2018-12-17 05:22:23 | 2019-01-01 13:45:56 | 2019-01-02 01:53:12 | 12:07:16 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
fail | 3371020 | 2018-12-17 05:22:24 | 2019-01-01 13:48:45 | 2019-01-01 14:56:45 | 1:08:00 | 0:09:03 | 0:58:57 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh075 with status 1: '\n sudo yum -y install bison\n '
pass | 3371021 | 2018-12-17 05:22:25 | 2019-01-01 13:50:15 | 2019-01-01 20:48:20 | 6:58:05 | 0:36:17 | 6:21:48 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3371022 | 2018-12-17 05:22:25 | 2019-01-01 13:50:18 | 2019-01-01 14:50:18 | 1:00:00 | 0:09:26 | 0:50:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh030 with status 1: '\n sudo yum -y install bison\n '
pass | 3371023 | 2018-12-17 05:22:26 | 2019-01-01 13:50:39 | 2019-01-01 14:26:39 | 0:36:00 | 0:18:15 | 0:17:45 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3371024 | 2018-12-17 05:22:27 | 2019-01-01 13:52:09 | 2019-01-01 18:06:12 | 4:14:03 | 0:12:26 | 4:01:37 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh040 with status 1: '\n sudo yum -y install bison\n '
pass | 3371025 | 2018-12-17 05:22:28 | 2019-01-01 14:01:46 | 2019-01-01 14:49:46 | 0:48:00 | 0:16:14 | 0:31:46 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3371026 | 2018-12-17 05:22:29 | 2019-01-01 14:02:07 | 2019-01-01 14:56:06 | 0:53:59 | 0:18:11 | 0:35:48 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3371027 | 2018-12-17 05:22:29 | 2019-01-01 14:04:39 | 2019-01-01 15:48:40 | 1:44:01 | 1:21:27 | 0:22:34 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
pass | 3371028 | 2018-12-17 05:22:30 | 2019-01-01 14:05:13 | 2019-01-01 16:35:14 | 2:30:01 | 0:27:42 | 2:02:19 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
pass | 3371029 | 2018-12-17 05:22:31 | 2019-01-01 14:15:47 | 2019-01-01 15:27:47 | 1:12:00 | 0:44:20 | 0:27:40 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3371030 | 2018-12-17 05:22:32 | 2019-01-01 14:16:04 | 2019-01-01 15:32:04 | 1:16:00 | 1:05:19 | 0:10:41 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3371031 | 2018-12-17 05:22:32 | 2019-01-01 14:19:05 | 2019-01-01 17:21:07 | 3:02:02 | 0:10:56 | 2:51:06 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh093 with status 1: '\n sudo yum -y install bison\n '
fail | 3371032 | 2018-12-17 05:22:33 | 2019-01-01 14:20:02 | 2019-01-01 15:24:02 | 1:04:00 | 0:10:08 | 0:53:52 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh010 with status 1: '\n sudo yum -y install bison\n '
pass | 3371033 | 2018-12-17 05:22:34 | 2019-01-01 14:20:31 | 2019-01-01 15:26:31 | 1:06:00 | 0:26:35 | 0:39:25 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
dead | 3371034 | 2018-12-17 05:22:34 | 2019-01-01 14:26:53 | 2019-01-02 02:43:39 | 12:16:46 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
fail | 3371035 | 2018-12-17 05:22:35 | 2019-01-01 14:33:58 | 2019-01-01 14:53:57 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh092.front.sepia.ceph.com
fail | 3371036 | 2018-12-17 05:22:36 | 2019-01-01 14:35:56 | 2019-01-01 15:53:56 | 1:18:00 | 0:50:26 | 0:27:34 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a64198e24492cde465cb3235813a30945003456a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 3371037 | 2018-12-17 05:22:37 | 2019-01-01 14:39:00 | 2019-01-01 19:03:03 | 4:24:03 | 0:10:55 | 4:13:08 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh075 with status 1: '\n sudo yum -y install bison\n '
pass | 3371038 | 2018-12-17 05:22:37 | 2019-01-01 14:39:35 | 2019-01-01 15:35:35 | 0:56:00 | 0:43:21 | 0:12:39 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3371039 | 2018-12-17 05:22:38 | 2019-01-01 14:44:27 | 2019-01-01 15:06:27 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
pass | 3371040 | 2018-12-17 05:22:39 | 2019-01-01 14:49:58 | 2019-01-01 18:24:00 | 3:34:02 | 0:26:13 | 3:07:49 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3371041 | 2018-12-17 05:22:40 | 2019-01-01 14:50:19 | 2019-01-01 15:08:18 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com
fail | 3371042 | 2018-12-17 05:22:40 | 2019-01-01 14:50:36 | 2019-01-01 16:02:36 | 1:12:00 | 0:08:44 | 1:03:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh030 with status 1: '\n sudo yum -y install bison\n '
pass | 3371043 | 2018-12-17 05:22:41 | 2019-01-01 14:54:08 | 2019-01-01 19:20:12 | 4:26:04 | 0:24:47 | 4:01:17 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3371044 | 2018-12-17 05:22:42 | 2019-01-01 14:56:18 | 2019-01-01 16:10:18 | 1:14:00 | 0:08:37 | 1:05:23 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh027 with status 1: '\n sudo yum -y install bison\n '
pass | 3371045 | 2018-12-17 05:22:43 | 2019-01-01 14:56:18 | 2019-01-01 15:46:18 | 0:50:00 | 0:20:37 | 0:29:23 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3371046 | 2018-12-17 05:22:43 | 2019-01-01 14:56:46 | 2019-01-01 18:12:49 | 3:16:03 | 0:49:54 | 2:26:09 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3371047 | 2018-12-17 05:22:44 | 2019-01-01 15:06:39 | 2019-01-01 15:28:38 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh058.front.sepia.ceph.com
fail | 3371048 | 2018-12-17 05:22:45 | 2019-01-01 15:07:14 | 2019-01-01 15:29:13 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh022.front.sepia.ceph.com
dead | 3371049 | 2018-12-17 05:22:46 | 2019-01-01 15:08:31 | 2019-01-02 03:10:42 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | — | |||
pass | 3371050 | 2018-12-17 05:22:46 | 2019-01-01 15:09:27 | 2019-01-01 15:53:27 | 0:44:00 | 0:18:21 | 0:25:39 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3371051 | 2018-12-17 05:22:47 | 2019-01-01 15:14:18 | 2019-01-01 16:12:18 | 0:58:00 | 0:20:49 | 0:37:11 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3371052 | 2018-12-17 05:22:48 | 2019-01-01 15:24:13 | 2019-01-01 16:44:14 | 1:20:01 | 0:09:16 | 1:10:45 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh058 with status 1: '\n sudo yum -y install bison\n '
pass | 3371053 | 2018-12-17 05:22:49 | 2019-01-01 15:26:42 | 2019-01-01 21:40:47 | 6:14:05 | 0:23:54 | 5:50:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3371054 | 2018-12-17 05:22:49 | 2019-01-01 15:28:00 | 2019-01-01 16:23:59 | 0:55:59 | 0:08:24 | 0:47:35 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh035 with status 1: '\n sudo yum -y install bison\n '
fail | 3371055 | 2018-12-17 05:22:50 | 2019-01-01 15:28:39 | 2019-01-01 15:44:38 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com
pass | 3371056 | 2018-12-17 05:22:51 | 2019-01-01 15:29:24 | 2019-01-01 17:55:26 | 2:26:02 | 0:20:06 | 2:05:56 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3371057 | 2018-12-17 05:22:52 | 2019-01-01 15:32:16 | 2019-01-01 16:36:16 | 1:04:00 | 0:08:35 | 0:55:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh032 with status 1: '\n sudo yum -y install bison\n '
fail | 3371058 | 2018-12-17 05:22:52 | 2019-01-01 15:34:33 | 2019-01-01 16:36:33 | 1:02:00 | 0:08:33 | 0:53:27 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh052 with status 1: '\n sudo yum -y install bison\n '
dead | 3371059 | 2018-12-17 05:22:53 | 2019-01-01 15:35:47 | 2019-01-02 03:37:58 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | — | |||
fail | 3371060 | 2018-12-17 05:22:54 | 2019-01-01 15:44:49 | 2019-01-01 16:08:49 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh060.front.sepia.ceph.com
pass | 3371061 | 2018-12-17 05:22:54 | 2019-01-01 15:46:23 | 2019-01-01 16:44:23 | 0:58:00 | 0:32:53 | 0:25:07 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3371062 | 2018-12-17 05:22:55 | 2019-01-01 15:46:23 | 2019-01-01 17:44:24 | 1:58:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh070.front.sepia.ceph.com
pass | 3371063 | 2018-12-17 05:22:56 | 2019-01-01 15:48:51 | 2019-01-01 16:44:51 | 0:56:00 | 0:42:50 | 0:13:10 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3371064 | 2018-12-17 05:22:57 | 2019-01-01 15:53:38 | 2019-01-01 16:21:38 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh030.front.sepia.ceph.com
fail | 3371065 | 2018-12-17 05:22:57 | 2019-01-01 15:53:57 | 2019-01-01 16:47:57 | 0:54:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh064.front.sepia.ceph.com
fail | 3371066 | 2018-12-17 05:22:58 | 2019-01-01 16:02:48 | 2019-01-01 16:24:47 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh100.front.sepia.ceph.com
fail | 3371067 | 2018-12-17 05:22:59 | 2019-01-01 16:09:00 | 2019-01-01 16:36:59 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com
fail | 3371068 | 2018-12-17 05:23:00 | 2019-01-01 16:10:29 | 2019-01-02 02:00:38 | 9:50:09 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
fail | 3371069 | 2018-12-17 05:23:00 | 2019-01-01 16:12:29 | 2019-01-01 17:04:29 | 0:52:00 | 0:07:16 | 0:44:44 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh038 with status 1: '\n sudo yum -y install bison\n '
fail | 3371070 | 2018-12-17 05:23:01 | 2019-01-01 16:20:28 | 2019-01-01 16:44:28 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh035.front.sepia.ceph.com
pass | 3371071 | 2018-12-17 05:23:02 | 2019-01-01 16:21:49 | 2019-01-01 20:53:52 | 4:32:03 | 0:41:04 | 3:50:59 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3371072 | 2018-12-17 05:23:03 | 2019-01-01 16:24:11 | 2019-01-01 17:22:11 | 0:58:00 | 0:09:30 | 0:48:30 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh011 with status 1: '\n sudo yum -y install bison\n '
fail | 3371073 | 2018-12-17 05:23:03 | 2019-01-01 16:24:48 | 2019-01-01 16:46:48 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
pass | 3371074 | 2018-12-17 05:23:04 | 2019-01-01 16:35:25 | 2019-01-01 18:31:26 | 1:56:01 | 0:37:09 | 1:18:52 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3371075 | 2018-12-17 05:23:05 | 2019-01-01 16:36:17 | 2019-01-01 17:00:17 | 0:24:00 | 0:15:06 | 0:08:54 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3371076 | 2018-12-17 05:23:06 | 2019-01-01 16:36:44 | 2019-01-01 17:26:44 | 0:50:00 | 0:25:13 | 0:24:47 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3371077 | 2018-12-17 05:23:07 | 2019-01-01 16:37:00 | 2019-01-01 18:25:01 | 1:48:01 | 0:59:01 | 0:49:00 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
pass | 3371078 | 2018-12-17 05:23:07 | 2019-01-01 16:39:14 | 2019-01-01 17:25:14 | 0:46:00 | 0:24:24 | 0:21:36 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
fail | 3371079 | 2018-12-17 05:23:08 | 2019-01-01 16:44:25 | 2019-01-01 18:04:25 | 1:20:00 | 0:57:29 | 0:22:31 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2019-01-01 17:35:30.155507 mon.a mon.0 158.69.66.156:6789/0 2940 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3371080 | 2018-12-17 05:23:09 | 2019-01-01 16:44:25 | 2019-01-01 18:02:25 | 1:18:00 | 0:55:31 | 0:22:29 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3371081 | 2018-12-17 05:23:09 | 2019-01-01 16:44:29 | 2019-01-01 19:22:30 | 2:38:01 | 0:28:13 | 2:09:48 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
fail | 3371082 | 2018-12-17 05:23:10 | 2019-01-01 16:44:52 | 2019-01-01 17:42:52 | 0:58:00 | 0:09:17 | 0:48:43 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh052 with status 1: '\n sudo yum -y install bison\n '
pass | 3371083 | 2018-12-17 05:23:11 | 2019-01-01 16:46:59 | 2019-01-01 18:02:59 | 1:16:00 | 0:27:22 | 0:48:38 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3371084 | 2018-12-17 05:23:12 | 2019-01-01 16:47:19 | 2019-01-01 20:53:23 | 4:06:04 | 0:22:57 | 3:43:07 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3371085 | 2018-12-17 05:23:12 | 2019-01-01 16:47:58 | 2019-01-01 17:01:57 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh068.front.sepia.ceph.com
fail | 3371086 | 2018-12-17 05:23:13 | 2019-01-01 17:00:19 | 2019-01-01 18:08:19 | 1:08:00 | 0:08:55 | 0:59:05 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh048 with status 1: '\n sudo yum -y install bison\n '
pass | 3371087 | 2018-12-17 05:23:14 | 2019-01-01 17:00:19 | 2019-01-01 18:54:20 | 1:54:01 | 0:44:58 | 1:09:03 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
pass | 3371088 | 2018-12-17 05:23:15 | 2019-01-01 17:02:09 | 2019-01-01 17:56:09 | 0:54:00 | 0:45:13 | 0:08:47 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3371089 | 2018-12-17 05:23:15 | 2019-01-01 17:04:40 | 2019-01-01 18:06:40 | 1:02:00 | 0:08:28 | 0:53:32 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh028 with status 1: '\n sudo yum -y install bison\n '
pass | 3371090 | 2018-12-17 05:23:16 | 2019-01-01 17:21:18 | 2019-01-01 22:05:22 | 4:44:04 | 0:24:26 | 4:19:38 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3371091 | 2018-12-17 05:23:17 | 2019-01-01 17:22:12 | 2019-01-01 18:16:12 | 0:54:00 | 0:10:02 | 0:43:58 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh076 with status 1: '\n sudo yum -y install bison\n '
pass | 3371092 | 2018-12-17 05:23:18 | 2019-01-01 17:25:25 | 2019-01-01 19:27:27 | 2:02:02 | 1:43:16 | 0:18:46 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
dead | 3371093 | 2018-12-17 05:23:18 | 2019-01-01 17:25:55 | 2019-01-02 05:28:07 | 12:02:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | — | |||
fail | 3371094 | 2018-12-17 05:23:19 | 2019-01-01 17:26:54 | 2019-01-01 17:56:54 | 0:30:00 | 0:08:25 | 0:21:35 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh056 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3371095 | 2018-12-17 05:23:20 | 2019-01-01 17:43:03 | 2019-01-01 18:21:03 | 0:38:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh027.front.sepia.ceph.com
dead | 3371096 | 2018-12-17 05:23:20 | 2019-01-01 17:44:26 | 2019-01-02 05:46:37 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | — | |||
fail | 3371097 | 2018-12-17 05:23:21 | 2019-01-01 17:55:44 | 2019-01-01 18:15:43 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh078.front.sepia.ceph.com
pass | 3371098 | 2018-12-17 05:23:22 | 2019-01-01 17:56:11 | 2019-01-01 18:38:10 | 0:41:59 | 0:18:22 | 0:23:37 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
dead | 3371099 | 2018-12-17 05:23:23 | 2019-01-01 17:57:48 | 2019-01-02 05:59:55 | 12:02:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | — | |||
fail | 3371100 | 2018-12-17 05:23:23 | 2019-01-01 18:02:27 | 2019-01-01 18:26:27 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh010.front.sepia.ceph.com