User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-11-28 05:20:02 | 2018-12-05 20:29:44 | 2018-12-06 09:23:46 | 12:54:02 | kcephfs | mimic | ovh | 042dc58 | 100 | 137 | 13 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 3288522 | 2018-11-28 05:20:29 | 2018-12-05 10:23:13 | 2018-12-05 11:17:13 | 0:54:00 | 0:21:34 | 0:32:26 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3288523 | 2018-11-28 05:20:30 | 2018-12-05 10:23:26 | 2018-12-05 12:47:27 | 2:24:01 | 0:46:35 | 1:37:26 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
pass | 3288524 | 2018-11-28 05:20:31 | 2018-12-05 10:25:43 | 2018-12-05 14:37:47 | 4:12:04 | 0:20:55 | 3:51:09 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3288525 | 2018-11-28 05:20:32 | 2018-12-05 10:26:30 | 2018-12-05 11:48:30 | 1:22:00 | 1:00:58 | 0:21:02 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 11:02:10.776207 mon.a mon.0 158.69.67.85:6789/0 1445 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3288526 | 2018-11-28 05:20:33 | 2018-12-05 10:29:33 | 2018-12-05 12:01:34 | 1:32:01 | 1:10:13 | 0:21:48 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3288527 | 2018-11-28 05:20:33 | 2018-12-05 10:30:58 | 2018-12-05 15:21:02 | 4:50:04 | 0:09:28 | 4:40:36 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh016 with status 1: '\n sudo yum -y install bison\n '
fail | 3288528 | 2018-11-28 05:20:34 | 2018-12-05 10:35:21 | 2018-12-05 11:13:21 | 0:38:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh014.front.sepia.ceph.com
pass | 3288529 | 2018-11-28 05:20:35 | 2018-12-05 10:35:26 | 2018-12-05 11:19:26 | 0:44:00 | 0:25:35 | 0:18:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3288530 | 2018-11-28 05:20:36 | 2018-12-05 10:39:17 | 2018-12-05 14:17:19 | 3:38:02 | 0:27:44 | 3:10:18 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
pass | 3288531 | 2018-11-28 05:20:37 | 2018-12-05 10:41:10 | 2018-12-05 12:35:11 | 1:54:01 | 1:39:06 | 0:14:55 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
pass | 3288532 | 2018-11-28 05:20:37 | 2018-12-05 10:46:54 | 2018-12-05 11:34:54 | 0:48:00 | 0:33:33 | 0:14:27 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
dead | 3288533 | 2018-11-28 05:20:38 | 2018-12-05 10:46:54 | 2018-12-05 22:59:07 | 12:12:13 | | | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 |
pass | 3288534 | 2018-11-28 05:20:39 | 2018-12-05 10:46:59 | 2018-12-05 11:46:59 | 1:00:00 | 0:45:20 | 0:14:40 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288535 | 2018-11-28 05:20:40 | 2018-12-05 10:51:28 | 2018-12-05 11:21:28 | 0:30:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh056.front.sepia.ceph.com
pass | 3288536 | 2018-11-28 05:20:40 | 2018-12-05 10:53:28 | 2018-12-05 16:47:33 | 5:54:05 | 0:19:33 | 5:34:32 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3288537 | 2018-11-28 05:20:41 | 2018-12-05 10:53:45 | 2018-12-05 11:35:45 | 0:42:00 | 0:32:15 | 0:09:45 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3288538 | 2018-11-28 05:20:42 | 2018-12-05 10:55:04 | 2018-12-05 12:13:04 | 1:18:00 | 0:47:45 | 0:30:15 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288539 | 2018-11-28 05:20:43 | 2018-12-05 10:59:07 | 2018-12-05 13:49:09 | 2:50:02 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 |
Failure Reason: Could not reconnect to ubuntu@ovh065.front.sepia.ceph.com
fail | 3288540 | 2018-11-28 05:20:43 | 2018-12-05 11:03:29 | 2018-12-05 11:23:28 | 0:19:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
pass | 3288541 | 2018-11-28 05:20:44 | 2018-12-05 11:06:39 | 2018-12-05 11:38:38 | 0:31:59 | 0:16:04 | 0:15:55 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3288542 | 2018-11-28 05:20:45 | 2018-12-05 11:11:02 | 2018-12-05 18:47:09 | 7:36:07 | 0:35:56 | 7:00:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3288543 | 2018-11-28 05:20:46 | 2018-12-05 11:13:33 | 2018-12-05 12:35:33 | 1:22:00 | 0:52:51 | 0:29:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-05 12:03:13.992022 mon.b mon.0 158.69.64.79:6789/0 146 : cluster [WRN] Health check failed: Degraded data redundancy: 1294/10286 objects degraded (12.580%), 5 pgs degraded (PG_DEGRADED)" in cluster log
pass | 3288544 | 2018-11-28 05:20:46 | 2018-12-05 11:13:45 | 2018-12-05 11:55:45 | 0:42:00 | 0:17:36 | 0:24:24 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3288545 | 2018-11-28 05:20:47 | 2018-12-05 11:17:25 | 2018-12-05 14:39:27 | 3:22:02 | 0:34:55 | 2:47:07 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3288546 | 2018-11-28 05:20:48 | 2018-12-05 11:19:38 | 2018-12-05 12:03:38 | 0:44:00 | 0:15:33 | 0:28:27 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3288547 | 2018-11-28 05:20:49 | 2018-12-05 11:21:39 | 2018-12-05 12:07:39 | 0:46:00 | 0:16:52 | 0:29:08 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3288548 | 2018-11-28 05:20:49 | 2018-12-05 11:21:40 | 2018-12-05 12:45:40 | 1:24:00 | 0:53:17 | 0:30:43 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
fail | 3288549 | 2018-11-28 05:20:50 | 2018-12-05 11:23:40 | 2018-12-05 13:45:42 | 2:22:02 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 |
Failure Reason: Could not reconnect to ubuntu@ovh096.front.sepia.ceph.com
fail | 3288550 | 2018-11-28 05:20:51 | 2018-12-05 11:35:05 | 2018-12-05 12:03:05 | 0:28:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com
fail | 3288551 | 2018-11-28 05:20:52 | 2018-12-05 11:35:46 | 2018-12-05 11:49:45 | 0:13:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh051.front.sepia.ceph.com
dead | 3288552 | 2018-11-28 05:20:53 | 2018-12-05 11:38:39 | 2018-12-05 23:45:56 | 12:07:17 | | | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 |
fail | 3288553 | 2018-11-28 05:20:54 | 2018-12-05 11:38:39 | 2018-12-05 11:56:39 | 0:18:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com
fail | 3288554 | 2018-11-28 05:20:55 | 2018-12-05 11:45:04 | 2018-12-05 12:03:04 | 0:18:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
fail | 3288555 | 2018-11-28 05:20:56 | 2018-12-05 11:47:11 | 2018-12-05 15:43:14 | 3:56:03 | 0:09:22 | 3:46:41 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh045 with status 1: '\n sudo yum -y install bison\n '
fail | 3288556 | 2018-11-28 05:20:57 | 2018-12-05 11:48:22 | 2018-12-05 13:48:23 | 2:00:01 | 1:37:19 | 0:22:42 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-05 12:15:17.574392 mon.b mon.0 158.69.64.195:6789/0 202 : cluster [WRN] Health check failed: 2 slow ops, oldest one blocked for 124 sec, mon.c has slow ops (SLOW_OPS)" in cluster log
fail | 3288557 | 2018-11-28 05:20:58 | 2018-12-05 11:48:31 | 2018-12-05 12:40:31 | 0:52:00 | 0:32:51 | 0:19:09 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh068 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=042dc584a80f9c5a44b3d6406d75b7aee6b7ac8c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 3288558 | 2018-11-28 05:20:59 | 2018-12-05 11:49:57 | 2018-12-05 18:46:02 | 6:56:05 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 |
Failure Reason: Could not reconnect to ubuntu@ovh058.front.sepia.ceph.com
fail | 3288559 | 2018-11-28 05:20:59 | 2018-12-05 11:51:54 | 2018-12-05 12:57:54 | 1:06:00 | 0:08:35 | 0:57:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh057 with status 1: '\n sudo yum -y install bison\n '
pass | 3288560 | 2018-11-28 05:21:00 | 2018-12-05 11:53:14 | 2018-12-05 13:15:14 | 1:22:00 | 0:42:42 | 0:39:18 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3288561 | 2018-11-28 05:21:01 | 2018-12-05 11:55:56 | 2018-12-05 21:10:04 | 9:14:08 | 0:09:22 | 9:04:46 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh030 with status 1: '\n sudo yum -y install bison\n '
pass | 3288562 | 2018-11-28 05:21:02 | 2018-12-05 11:56:40 | 2018-12-05 12:38:40 | 0:42:00 | 0:27:27 | 0:14:33 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3288563 | 2018-11-28 05:21:03 | 2018-12-05 11:57:13 | 2018-12-05 13:03:13 | 1:06:00 | 0:47:24 | 0:18:36 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288564 | 2018-11-28 05:21:04 | 2018-12-05 12:01:47 | 2018-12-05 19:13:53 | 7:12:06 | 0:09:23 | 7:02:43 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh053 with status 1: '\n sudo yum -y install bison\n '
fail | 3288565 | 2018-11-28 05:21:05 | 2018-12-05 12:03:15 | 2018-12-05 12:31:15 | 0:28:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh014.front.sepia.ceph.com
fail | 3288566 | 2018-11-28 05:21:06 | 2018-12-05 12:03:16 | 2018-12-05 12:25:15 | 0:21:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh056.front.sepia.ceph.com
pass | 3288567 | 2018-11-28 05:21:07 | 2018-12-05 12:03:15 | 2018-12-05 17:37:20 | 5:34:05 | 0:43:55 | 4:50:10 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3288568 | 2018-11-28 05:21:07 | 2018-12-05 12:03:39 | 2018-12-05 12:39:38 | 0:35:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
fail | 3288569 | 2018-11-28 05:21:08 | 2018-12-05 12:04:11 | 2018-12-05 12:18:11 | 0:14:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@ovh093.front.sepia.ceph.com
fail | 3288570 | 2018-11-28 05:21:09 | 2018-12-05 12:07:51 | 2018-12-05 13:51:52 | 1:44:01 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 |
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
fail | 3288571 | 2018-11-28 05:21:10 | 2018-12-05 12:08:38 | 2018-12-05 13:18:38 | 1:10:00 | 0:07:35 | 1:02:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh034 with status 1: '\n sudo yum -y install bison\n '
pass | 3288572 | 2018-11-28 05:21:10 | 2018-12-05 12:13:16 | 2018-12-05 12:39:15 | 0:25:59 | 0:16:57 | 0:09:02 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3288573 | 2018-11-28 05:21:11 | 2018-12-05 12:17:08 | 2018-12-05 12:55:08 | 0:38:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 |
Failure Reason: Could not reconnect to ubuntu@ovh016.front.sepia.ceph.com
fail | 3288574 | 2018-11-28 05:21:12 | 2018-12-05 12:18:23 | 2018-12-05 17:18:27 | 5:00:04 | 0:09:43 | 4:50:21 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh039 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288575 | 2018-11-28 05:21:13 | 2018-12-05 12:25:27 | 2018-12-05 13:27:27 | 1:02:00 | 0:49:26 | 0:12:34 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 13:01:15.239395 mon.b mon.0 158.69.67.196:6789/0 1464 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3288576 | 2018-11-28 05:21:14 | 2018-12-05 12:26:53 | 2018-12-05 14:02:54 | 1:36:01 | 1:00:57 | 0:35:04 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3288577 | 2018-11-28 05:21:15 | 2018-12-05 12:28:31 | 2018-12-05 16:08:33 | 3:40:02 | 0:21:16 | 3:18:46 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3288578 | 2018-11-28 05:21:15 | 2018-12-05 12:31:27 | 2018-12-05 13:33:27 | 1:02:00 | 0:52:41 | 0:09:19 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
pass | 3288579 | 2018-11-28 05:21:16 | 2018-12-05 12:35:23 | 2018-12-05 13:27:23 | 0:52:00 | 0:26:46 | 0:25:14 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3288580 | 2018-11-28 05:21:17 | 2018-12-05 12:35:34 | 2018-12-05 14:05:35 | 1:30:01 | 0:30:04 | 0:59:57 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3288581 | 2018-11-28 05:21:18 | 2018-12-05 12:38:52 | 2018-12-05 12:58:51 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh083.front.sepia.ceph.com |
fail | 3288582 | 2018-11-28 05:21:19 | 2018-12-05 12:39:16 | 2018-12-05 16:23:19 | 3:44:03 | 3:17:26 | 0:26:37 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/iozone.sh) on ovh030 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=042dc584a80f9c5a44b3d6406d75b7aee6b7ac8c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/iozone.sh' |
fail | 3288583 | 2018-11-28 05:21:20 | 2018-12-05 12:39:39 | 2018-12-05 18:01:44 | 5:22:05 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh081.front.sepia.ceph.com |
fail | 3288584 | 2018-11-28 05:21:21 | 2018-12-05 12:40:43 | 2018-12-05 13:10:43 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh039.front.sepia.ceph.com |
fail | 3288585 | 2018-11-28 05:21:22 | 2018-12-05 12:45:23 | 2018-12-05 14:19:24 | 1:34:01 | 1:09:52 | 0:24:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 13:18:23.128558 mon.a mon.0 158.69.68.97:6789/0 159 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3288586 | 2018-11-28 05:21:22 | 2018-12-05 12:45:41 | 2018-12-05 20:05:47 | 7:20:06 | 0:21:36 | 6:58:30 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3288587 | 2018-11-28 05:21:23 | 2018-12-05 12:47:39 | 2018-12-05 13:21:39 | 0:34:00 | 0:25:29 | 0:08:31 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3288588 | 2018-11-28 05:21:24 | 2018-12-05 12:48:35 | 2018-12-05 13:38:35 | 0:50:00 | 0:40:39 | 0:09:21 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288589 | 2018-11-28 05:21:25 | 2018-12-05 12:53:01 | 2018-12-05 21:33:08 | 8:40:07 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh027.front.sepia.ceph.com |
fail | 3288590 | 2018-11-28 05:21:26 | 2018-12-05 12:55:20 | 2018-12-05 14:03:20 | 1:08:00 | 0:49:10 | 0:18:50 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 13:38:00.503682 mon.a mon.0 158.69.66.6:6789/0 125 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3288591 | 2018-11-28 05:21:26 | 2018-12-05 12:58:06 | 2018-12-05 13:54:06 | 0:56:00 | 0:21:36 | 0:34:24 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3288592 | 2018-11-28 05:21:27 | 2018-12-05 12:58:52 | 2018-12-05 16:24:55 | 3:26:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh020.front.sepia.ceph.com |
fail | 3288593 | 2018-11-28 05:21:28 | 2018-12-05 13:03:26 | 2018-12-05 14:11:26 | 1:08:00 | 0:39:43 | 0:28:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-05 13:48:57.012129 mon.a mon.0 158.69.67.111:6789/0 253 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
fail | 3288594 | 2018-11-28 05:21:29 | 2018-12-05 13:11:36 | 2018-12-05 13:27:30 | 0:15:54 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com |
fail | 3288595 | 2018-11-28 05:21:29 | 2018-12-05 13:15:56 | 2018-12-05 14:31:55 | 1:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh089.front.sepia.ceph.com |
fail | 3288596 | 2018-11-28 05:21:30 | 2018-12-05 13:16:20 | 2018-12-05 14:18:20 | 1:02:00 | 0:07:58 | 0:54:02 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288597 | 2018-11-28 05:21:31 | 2018-12-05 13:18:51 | 2018-12-05 13:44:51 | 0:26:00 | 0:17:22 | 0:08:38 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3288598 | 2018-11-28 05:21:32 | 2018-12-05 13:20:21 | 2018-12-05 14:58:22 | 1:38:01 | 0:08:57 | 1:29:04 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason: Command failed on ovh002 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288599 | 2018-11-28 05:21:33 | 2018-12-05 13:20:49 | 2018-12-05 18:30:53 | 5:10:04 | 0:09:42 | 5:00:22 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh010 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288600 | 2018-11-28 05:21:34 | 2018-12-05 13:21:52 | 2018-12-05 13:41:51 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh074.front.sepia.ceph.com |
fail | 3288601 | 2018-11-28 05:21:35 | 2018-12-05 13:27:36 | 2018-12-05 14:27:36 | 1:00:00 | 0:07:20 | 0:52:40 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh086 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288602 | 2018-11-28 05:21:36 | 2018-12-05 13:27:36 | 2018-12-05 18:57:40 | 5:30:04 | 0:10:14 | 5:19:50 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh060 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288603 | 2018-11-28 05:21:37 | 2018-12-05 13:27:36 | 2018-12-05 14:03:35 | 0:35:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh087.front.sepia.ceph.com |
pass | 3288604 | 2018-11-28 05:21:37 | 2018-12-05 13:30:58 | 2018-12-05 14:22:58 | 0:52:00 | 0:26:43 | 0:25:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3288605 | 2018-11-28 05:21:38 | 2018-12-05 13:33:39 | 2018-12-05 15:43:40 | 2:10:01 | 0:21:34 | 1:48:27 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3288606 | 2018-11-28 05:21:39 | 2018-12-05 13:38:47 | 2018-12-05 15:22:48 | 1:44:01 | 1:16:02 | 0:27:59 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-05 14:59:22.060243 mon.b mon.0 158.69.65.191:6789/0 502 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log |
fail | 3288607 | 2018-11-28 05:21:40 | 2018-12-05 13:42:03 | 2018-12-05 14:40:03 | 0:58:00 | 0:36:16 | 0:21:44 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh074 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=042dc584a80f9c5a44b3d6406d75b7aee6b7ac8c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
fail | 3288608 | 2018-11-28 05:21:41 | 2018-12-05 13:42:24 | 2018-12-05 16:08:25 | 2:26:01 | 0:09:41 | 2:16:20 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh078 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288609 | 2018-11-28 05:21:41 | 2018-12-05 13:45:03 | 2018-12-05 14:53:03 | 1:08:00 | 0:43:01 | 0:24:59 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288610 | 2018-11-28 05:21:42 | 2018-12-05 13:45:43 | 2018-12-05 14:53:43 | 1:08:00 | 0:07:41 | 1:00:19 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh015 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288611 | 2018-11-28 05:21:43 | 2018-12-05 13:48:35 | 2018-12-05 20:14:41 | 6:26:06 | 0:09:21 | 6:16:45 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh001 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288612 | 2018-11-28 05:21:44 | 2018-12-05 13:49:10 | 2018-12-05 14:47:10 | 0:58:00 | 0:07:49 | 0:50:11 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh065 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288613 | 2018-11-28 05:21:45 | 2018-12-05 13:52:04 | 2018-12-05 15:38:05 | 1:46:01 | 1:18:56 | 0:27:05 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288614 | 2018-11-28 05:21:45 | 2018-12-05 13:54:18 | 2018-12-05 15:46:19 | 1:52:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh029.front.sepia.ceph.com |
fail | 3288615 | 2018-11-28 05:21:46 | 2018-12-05 14:03:06 | 2018-12-05 15:07:06 | 1:04:00 | 0:07:46 | 0:56:14 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh027 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288616 | 2018-11-28 05:21:47 | 2018-12-05 14:03:22 | 2018-12-05 15:15:22 | 1:12:00 | 0:08:19 | 1:03:41 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288617 | 2018-11-28 05:21:48 | 2018-12-05 14:03:36 | 2018-12-05 15:25:37 | 1:22:01 | 0:43:42 | 0:38:19 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3288618 | 2018-11-28 05:21:49 | 2018-12-05 14:05:37 | 2018-12-05 15:03:37 | 0:58:00 | 0:07:39 | 0:50:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh029 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288619 | 2018-11-28 05:21:50 | 2018-12-05 14:09:11 | 2018-12-05 14:49:10 | 0:39:59 | 0:16:45 | 0:23:14 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3288620 | 2018-11-28 05:21:51 | 2018-12-05 14:11:28 | 2018-12-05 20:29:33 | 6:18:05 | 0:46:54 | 5:31:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
fail | 3288621 | 2018-11-28 05:21:51 | 2018-12-05 14:17:23 | 2018-12-05 15:11:23 | 0:54:00 | 0:07:36 | 0:46:24 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh063 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288622 | 2018-11-28 05:21:52 | 2018-12-05 14:18:21 | 2018-12-05 15:12:21 | 0:54:00 | 0:07:42 | 0:46:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288623 | 2018-11-28 05:21:53 | 2018-12-05 14:19:26 | 2018-12-05 15:37:26 | 1:18:00 | 0:07:51 | 1:10:09 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh069 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288624 | 2018-11-28 05:21:54 | 2018-12-05 14:23:11 | 2018-12-05 16:31:12 | 2:08:01 | 0:22:22 | 1:45:39 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3288625 | 2018-11-28 05:21:54 | 2018-12-05 14:27:37 | 2018-12-05 14:53:37 | 0:26:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh057.front.sepia.ceph.com
fail | 3288626 | 2018-11-28 05:21:55 | 2018-12-05 14:31:57 | 2018-12-05 14:45:57 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh031.front.sepia.ceph.com
fail | 3288627 | 2018-11-28 05:21:56 | 2018-12-05 14:38:00 | 2018-12-05 18:08:02 | 3:30:02 | 0:11:00 | 3:19:02 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install bison\n '
fail | 3288628 | 2018-11-28 05:21:57 | 2018-12-05 14:39:34 | 2018-12-05 15:03:33 | 0:23:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
fail | 3288629 | 2018-11-28 05:21:57 | 2018-12-05 14:40:04 | 2018-12-05 15:02:04 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh074.front.sepia.ceph.com
pass | 3288630 | 2018-11-28 05:21:58 | 2018-12-05 14:46:10 | 2018-12-05 17:34:11 | 2:48:01 | 0:28:40 | 2:19:21 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3288631 | 2018-11-28 05:21:59 | 2018-12-05 14:47:23 | 2018-12-05 16:25:23 | 1:38:00 | 1:23:46 | 0:14:14 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-05 16:08:51.541795 mon.a mon.0 158.69.66.202:6789/0 411 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3288632 | 2018-11-28 05:22:00 | 2018-12-05 14:49:23 | 2018-12-05 15:07:23 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
fail | 3288633 | 2018-11-28 05:22:01 | 2018-12-05 14:53:16 | 2018-12-05 16:35:17 | 1:42:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
pass | 3288634 | 2018-11-28 05:22:01 | 2018-12-05 14:53:38 | 2018-12-05 16:23:39 | 1:30:01 | 0:47:07 | 0:42:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288635 | 2018-11-28 05:22:02 | 2018-12-05 14:53:44 | 2018-12-05 15:53:44 | 1:00:00 | 0:07:44 | 0:52:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh035 with status 1: '\n sudo yum -y install bison\n '
fail | 3288636 | 2018-11-28 05:22:03 | 2018-12-05 14:58:24 | 2018-12-05 17:04:30 | 2:06:06 | 0:09:11 | 1:56:55 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh041 with status 1: '\n sudo yum -y install bison\n '
fail | 3288637 | 2018-11-28 05:22:04 | 2018-12-05 15:02:06 | 2018-12-05 15:20:05 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com
pass | 3288638 | 2018-11-28 05:22:04 | 2018-12-05 15:03:35 | 2018-12-05 17:01:36 | 1:58:01 | 1:39:16 | 0:18:45 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288639 | 2018-11-28 05:22:05 | 2018-12-05 15:03:38 | 2018-12-05 21:03:43 | 6:00:05 | 0:10:01 | 5:50:04 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh075 with status 1: '\n sudo yum -y install bison\n '
pass | 3288640 | 2018-11-28 05:22:06 | 2018-12-05 15:07:08 | 2018-12-05 16:21:08 | 1:14:00 | 0:36:53 | 0:37:07 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3288641 | 2018-11-28 05:22:07 | 2018-12-05 15:07:24 | 2018-12-05 16:05:24 | 0:58:00 | 0:07:52 | 0:50:08 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh063 with status 1: '\n sudo yum -y install bison\n '
fail | 3288642 | 2018-11-28 05:22:08 | 2018-12-05 15:11:37 | 2018-12-05 21:45:42 | 6:34:05 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com
fail | 3288643 | 2018-11-28 05:22:08 | 2018-12-05 15:12:22 | 2018-12-05 16:08:22 | 0:56:00 | 0:07:28 | 0:48:32 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh046 with status 1: '\n sudo yum -y install bison\n '
pass | 3288644 | 2018-11-28 05:22:09 | 2018-12-05 15:15:27 | 2018-12-05 15:49:27 | 0:34:00 | 0:18:07 | 0:15:53 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3288645 | 2018-11-28 05:22:10 | 2018-12-05 15:20:18 | 2018-12-05 21:50:23 | 6:30:05 | 0:09:49 | 6:20:16 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh049 with status 1: '\n sudo yum -y install bison\n '
pass | 3288646 | 2018-11-28 05:22:11 | 2018-12-05 15:21:03 | 2018-12-05 15:59:02 | 0:37:59 | 0:15:21 | 0:22:38 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3288647 | 2018-11-28 05:22:12 | 2018-12-05 15:22:57 | 2018-12-05 15:36:56 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
pass | 3288648 | 2018-11-28 05:22:12 | 2018-12-05 15:25:39 | 2018-12-05 16:51:39 | 1:26:00 | 1:10:24 | 0:15:36 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
pass | 3288649 | 2018-11-28 05:22:13 | 2018-12-05 15:37:11 | 2018-12-06 00:39:19 | 9:02:08 | 0:26:58 | 8:35:10 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
fail | 3288650 | 2018-11-28 05:22:14 | 2018-12-05 15:37:27 | 2018-12-05 16:13:27 | 0:36:00 | 0:11:28 | 0:24:32 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Ansible task 'ifdown ens3 && ifup ens3' failed with rc 127 on ovh053, ovh060 and ovh080 (front.sepia.ceph.com): /bin/sh: 1: ifdown: not found
pass | 3288651 | 2018-11-28 05:22:15 | 2018-12-05 15:38:06 | 2018-12-05 16:50:06 | 1:12:00 | 0:52:54 | 0:19:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3288652 | 2018-11-28 05:22:16 | 2018-12-05 15:43:15 | 2018-12-06 03:50:48 | 12:07:33 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
fail | 3288653 | 2018-11-28 05:22:16 | 2018-12-05 15:43:41 | 2018-12-05 17:07:42 | 1:24:01 | 1:04:22 | 0:19:39 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: "2018-12-05 16:45:26.464237 mon.a mon.0 158.69.65.223:6789/0 191 : cluster [WRN] Health check failed: Degraded data redundancy: 8402/88584 objects degraded (9.485%), 5 pgs degraded (PG_DEGRADED)" in cluster log
fail | 3288654 | 2018-11-28 05:22:17 | 2018-12-05 15:46:20 | 2018-12-05 16:06:20 | 0:20:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh070.front.sepia.ceph.com
pass | 3288655 | 2018-11-28 05:22:18 | 2018-12-05 15:49:39 | 2018-12-05 22:01:44 | 6:12:05 | 0:20:21 | 5:51:44 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
pass | 3288656 | 2018-11-28 05:22:19 | 2018-12-05 15:53:57 | 2018-12-05 17:47:58 | 1:54:01 | 1:30:49 | 0:23:12 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3288657 | 2018-11-28 05:22:20 | 2018-12-05 15:59:15 | 2018-12-05 16:19:15 | 0:20:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
fail | 3288658 | 2018-11-28 05:22:20 | 2018-12-05 16:02:53 | 2018-12-06 00:23:00 | 8:20:07 | 0:23:50 | 7:56:17 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_full_fclose (tasks.cephfs.test_full.TestClusterFull)
pass | 3288659 | 2018-11-28 05:22:21 | 2018-12-05 16:05:36 | 2018-12-05 17:05:36 | 1:00:00 | 0:46:11 | 0:13:49 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288660 | 2018-11-28 05:22:22 | 2018-12-05 16:06:21 | 2018-12-05 17:08:21 | 1:02:00 | 0:48:03 | 0:13:57 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 16:45:01.471144 mon.a mon.0 158.69.64.250:6789/0 168 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3288661 | 2018-11-28 05:22:23 | 2018-12-05 16:08:35 | 2018-12-05 19:24:37 | 3:16:02 | 0:10:09 | 3:05:53 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install bison\n '
fail | 3288662 | 2018-11-28 05:22:24 | 2018-12-05 16:08:35 | 2018-12-05 16:36:35 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh060.front.sepia.ceph.com
pass | 3288663 | 2018-11-28 05:22:24 | 2018-12-05 16:08:35 | 2018-12-05 17:26:35 | 1:18:00 | 0:45:24 | 0:32:36 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3288664 | 2018-11-28 05:22:25 | 2018-12-05 16:13:38 | 2018-12-05 19:09:40 | 2:56:02 | 0:23:26 | 2:32:36 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3288665 | 2018-11-28 05:22:26 | 2018-12-05 16:19:27 | 2018-12-05 17:39:27 | 1:20:00 | 0:54:11 | 0:25:49 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-05 17:01:40.958658 mon.a mon.0 158.69.68.202:6789/0 181 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3288666 | 2018-11-28 05:22:27 | 2018-12-05 16:21:20 | 2018-12-05 16:57:19 | 0:35:59 | 0:18:38 | 0:17:21 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3288667 | 2018-11-28 05:22:28 | 2018-12-05 16:23:31 | 2018-12-05 23:17:37 | 6:54:06 | 0:45:23 | 6:08:43 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3288668 | 2018-11-28 05:22:28 | 2018-12-05 16:23:39 | 2018-12-05 16:47:39 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh038.front.sepia.ceph.com
fail | 3288669 | 2018-11-28 05:22:29 | 2018-12-05 16:25:06 | 2018-12-05 17:33:07 | 1:08:01 | 0:07:32 | 1:00:29 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh011 with status 1: '\n sudo yum -y install bison\n '
fail | 3288670 | 2018-11-28 05:22:30 | 2018-12-05 16:25:24 | 2018-12-06 00:19:31 | 7:54:07 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh065.front.sepia.ceph.com
fail | 3288671 | 2018-11-28 05:22:31 | 2018-12-05 16:31:23 | 2018-12-05 17:21:23 | 0:50:00 | 0:07:25 | 0:42:35 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh069 with status 1: '\n sudo yum -y install bison\n '
fail | 3288672 | 2018-11-28 05:22:32 | 2018-12-05 16:35:28 | 2018-12-05 16:59:28 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh046.front.sepia.ceph.com
pass | 3288673 | 2018-11-28 05:22:32 | 2018-12-05 16:36:46 | 2018-12-05 17:54:47 | 1:18:01 | 1:03:36 | 0:14:25 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
pass | 3288674 | 2018-11-28 05:22:33 | 2018-12-05 16:47:44 | 2018-12-05 22:49:49 | 6:02:05 | 0:20:03 | 5:42:02 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3288675 | 2018-11-28 05:22:34 | 2018-12-05 16:47:44 | 2018-12-05 17:15:44 | 0:28:00 | 0:10:22 | 0:17:38 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
ansible: 'ifdown ens3 && ifup ens3' returned rc 127 ('/bin/sh: 1: ifdown: not found') on ovh067, ovh032 and ovh094 |
pass | 3288676 | 2018-11-28 05:22:35 | 2018-12-05 16:50:17 | 2018-12-05 18:22:18 | 1:32:01 | 0:53:23 | 0:38:38 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3288677 | 2018-11-28 05:22:36 | 2018-12-05 16:51:50 | 2018-12-05 17:53:50 | 1:02:00 | 0:15:44 | 0:46:16 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
ansible: 'ifdown ens3 && ifup ens3' returned rc 127 ('/bin/sh: 1: ifdown: not found') on ovh048, ovh067, ovh034, ovh032, ovh036 and ovh094 |
fail | 3288678 | 2018-11-28 05:22:37 | 2018-12-05 16:57:32 | 2018-12-05 17:19:32 | 0:22:00 | 0:10:54 | 0:11:06 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
ansible: 'ifdown ens3 && ifup ens3' returned rc 127 ('/bin/sh: 1: ifdown: not found') on ovh072, ovh091 and ovh002 |
fail | 3288679 | 2018-11-28 05:22:37 | 2018-12-05 16:59:41 | 2018-12-05 18:01:41 | 1:02:00 | 0:08:21 | 0:53:39 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason:
Command failed on ovh061 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288680 | 2018-11-28 05:22:38 | 2018-12-05 17:01:48 | 2018-12-05 19:17:49 | 2:16:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh063.front.sepia.ceph.com |
fail | 3288681 | 2018-11-28 05:22:39 | 2018-12-05 17:04:41 | 2018-12-05 18:58:42 | 1:54:01 | 1:40:36 | 0:13:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2018-12-05 17:27:50.792454 mon.b mon.0 158.69.68.229:6789/0 196 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 121 sec, mon.c has slow ops (SLOW_OPS)" in cluster log |
fail | 3288682 | 2018-11-28 05:22:40 | 2018-12-05 17:05:37 | 2018-12-05 18:03:37 | 0:58:00 | 0:07:15 | 0:50:45 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh082 with status 1: '\n sudo yum -y install bison\n ' |
dead | 3288683 | 2018-11-28 05:22:41 | 2018-12-05 17:07:53 | 2018-12-06 05:20:00 | 12:12:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
pass | 3288684 | 2018-11-28 05:22:41 | 2018-12-05 17:08:22 | 2018-12-05 18:18:22 | 1:10:00 | 0:44:16 | 0:25:44 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
pass | 3288685 | 2018-11-28 05:22:42 | 2018-12-05 17:15:55 | 2018-12-05 18:25:55 | 1:10:00 | 0:41:04 | 0:28:56 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3288686 | 2018-11-28 05:22:43 | 2018-12-05 17:18:38 | 2018-12-05 18:28:39 | 1:10:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com |
pass | 3288687 | 2018-11-28 05:22:44 | 2018-12-05 17:19:33 | 2018-12-05 17:59:33 | 0:40:00 | 0:26:55 | 0:13:05 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3288688 | 2018-11-28 05:22:45 | 2018-12-05 17:21:34 | 2018-12-05 17:51:34 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com |
fail | 3288689 | 2018-11-28 05:22:46 | 2018-12-05 17:26:47 | 2018-12-05 20:52:49 | 3:26:02 | 0:09:24 | 3:16:38 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh080 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288690 | 2018-11-28 05:22:46 | 2018-12-05 17:33:20 | 2018-12-05 18:59:20 | 1:26:00 | 1:08:08 | 0:17:52 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-12-05 18:05:45.398627 mon.a mon.0 158.69.66.43:6789/0 34 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3288691 | 2018-11-28 05:22:47 | 2018-12-05 17:34:12 | 2018-12-05 18:06:12 | 0:32:00 | 0:16:00 | 0:16:00 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3288692 | 2018-11-28 05:22:48 | 2018-12-05 17:37:22 | 2018-12-06 00:09:27 | 6:32:05 | 0:38:31 | 5:53:34 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
pass | 3288693 | 2018-11-28 05:22:49 | 2018-12-05 17:39:29 | 2018-12-05 18:49:29 | 1:10:00 | 0:48:53 | 0:21:07 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3288694 | 2018-11-28 05:22:49 | 2018-12-05 17:48:00 | 2018-12-05 18:17:59 | 0:29:59 | 0:17:13 | 0:12:46 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
dead | 3288695 | 2018-11-28 05:22:50 | 2018-12-05 17:51:45 | 2018-12-06 05:53:57 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |||
pass | 3288696 | 2018-11-28 05:22:51 | 2018-12-05 17:54:02 | 2018-12-05 18:28:01 | 0:33:59 | 0:15:14 | 0:18:45 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3288697 | 2018-11-28 05:22:52 | 2018-12-05 17:54:48 | 2018-12-05 18:30:48 | 0:36:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh030.front.sepia.ceph.com |
fail | 3288698 | 2018-11-28 05:22:53 | 2018-12-05 17:59:44 | 2018-12-05 18:37:44 | 0:38:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh044.front.sepia.ceph.com |
pass | 3288699 | 2018-11-28 05:22:53 | 2018-12-05 18:01:52 | 2018-12-05 22:37:56 | 4:36:04 | 0:24:14 | 4:11:50 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
pass | 3288700 | 2018-11-28 05:22:54 | 2018-12-05 18:01:52 | 2018-12-05 19:15:53 | 1:14:01 | 0:47:49 | 0:26:12 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3288701 | 2018-11-28 05:22:55 | 2018-12-05 18:03:49 | 2018-12-05 18:21:48 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh095.front.sepia.ceph.com |
fail | 3288702 | 2018-11-28 05:22:56 | 2018-12-05 18:06:24 | 2018-12-05 23:48:28 | 5:42:04 | 0:26:42 | 5:15:22 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair) |
fail | 3288703 | 2018-11-28 05:22:56 | 2018-12-05 18:08:14 | 2018-12-05 19:02:14 | 0:54:00 | 0:07:27 | 0:46:33 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
Command failed on ovh052 with status 1: '\n sudo yum -y install bison\n ' |
pass | 3288704 | 2018-11-28 05:22:57 | 2018-12-05 18:18:10 | 2018-12-05 19:12:10 | 0:54:00 | 0:26:39 | 0:27:21 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 3288705 | 2018-11-28 05:22:58 | 2018-12-05 18:18:23 | 2018-12-05 20:52:25 | 2:34:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh065.front.sepia.ceph.com |
fail | 3288706 | 2018-11-28 05:22:59 | 2018-12-05 18:22:00 | 2018-12-05 19:20:00 | 0:58:00 | 0:07:45 | 0:50:15 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
Command failed on ovh059 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288707 | 2018-11-28 05:23:00 | 2018-12-05 18:22:19 | 2018-12-05 18:38:18 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com |
pass | 3288708 | 2018-11-28 05:23:00 | 2018-12-05 18:26:06 | 2018-12-06 01:36:12 | 7:10:06 | 0:37:35 | 6:32:31 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
pass | 3288709 | 2018-11-28 05:23:01 | 2018-12-05 18:28:12 | 2018-12-05 19:50:13 | 1:22:01 | 0:44:15 | 0:37:46 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288710 | 2018-11-28 05:23:02 | 2018-12-05 18:28:40 | 2018-12-05 19:30:40 | 1:02:00 | 0:07:42 | 0:54:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install bison\n ' |
fail | 3288711 | 2018-11-28 05:23:03 | 2018-12-05 18:31:00 | 2018-12-05 22:17:03 | 3:46:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh042.front.sepia.ceph.com |
pass | 3288712 | 2018-11-28 05:23:03 | 2018-12-05 18:31:00 | 2018-12-05 19:15:00 | 0:44:00 | 0:28:17 | 0:15:43 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3288713 | 2018-11-28 05:23:04 | 2018-12-05 18:37:55 | 2018-12-05 19:31:56 | 0:54:01 | 0:40:40 | 0:13:21 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3288714 | 2018-11-28 05:23:05 | 2018-12-05 18:38:19 | 2018-12-05 20:14:20 | 1:36:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com |
fail | 3288715 | 2018-11-28 05:23:06 | 2018-12-05 18:46:14 | 2018-12-05 19:58:15 | 1:12:01 | 0:48:18 | 0:23:43 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
"2018-12-05 19:26:31.067698 mon.a mon.0 158.69.65.126:6789/0 32 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3288716 | 2018-11-28 05:23:06 | 2018-12-05 18:47:10 | 2018-12-05 19:17:09 | 0:29:59 | 0:16:42 | 0:13:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3288717 | 2018-11-28 05:23:07 | 2018-12-05 18:49:41 | 2018-12-05 23:45:45 | 4:56:04 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com
fail | 3288718 | 2018-11-28 05:23:08 | 2018-12-05 18:57:51 | 2018-12-05 19:53:51 | 0:56:00 | 0:08:27 | 0:47:33 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh091 with status 1: '\n sudo yum -y install bison\n '
fail | 3288719 | 2018-11-28 05:23:09 | 2018-12-05 18:58:43 | 2018-12-05 19:30:43 | 0:32:00 | 0:10:19 | 0:21:41 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: {'ovh090.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-05 19:19:59.325622', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-05 19:19:59.320806', 'delta': '0:00:00.004816', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh028.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-05 19:19:43.743229', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-05 19:19:43.737427', 'delta': '0:00:00.005802', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh035.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-05 19:20:21.029011', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-05 19:20:21.025433', 'delta': '0:00:00.003578', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}}
pass | 3288720 | 2018-11-28 05:23:09 | 2018-12-05 18:59:32 | 2018-12-05 21:27:33 | 2:28:01 | 0:42:59 | 1:45:02 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3288721 | 2018-11-28 05:23:10 | 2018-12-05 19:02:25 | 2018-12-05 19:42:25 | 0:40:00 | 0:15:15 | 0:24:45 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3288722 | 2018-11-28 05:23:11 | 2018-12-05 19:09:52 | 2018-12-05 19:57:52 | 0:48:00 | 0:16:38 | 0:31:22 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3288723 | 2018-11-28 05:23:12 | 2018-12-05 19:12:22 | 2018-12-05 20:32:23 | 1:20:01 | 0:07:39 | 1:12:22 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh065 with status 1: '\n sudo yum -y install bison\n '
dead | 3288724 | 2018-11-28 05:23:12 | 2018-12-05 19:14:05 | 2018-12-06 07:16:16 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | — | |||
pass | 3288725 | 2018-11-28 05:23:13 | 2018-12-05 19:15:01 | 2018-12-05 20:07:01 | 0:52:00 | 0:43:21 | 0:08:39 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3288726 | 2018-11-28 05:23:14 | 2018-12-05 19:16:04 | 2018-12-05 20:46:05 | 1:30:01 | 1:01:13 | 0:28:48 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3288727 | 2018-11-28 05:23:15 | 2018-12-05 19:17:21 | 2018-12-05 21:23:22 | 2:06:01 | 0:20:46 | 1:45:15 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3288728 | 2018-11-28 05:23:15 | 2018-12-05 19:17:50 | 2018-12-05 19:39:50 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com
fail | 3288729 | 2018-11-28 05:23:16 | 2018-12-05 19:20:11 | 2018-12-05 19:36:10 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh016.front.sepia.ceph.com
pass | 3288730 | 2018-11-28 05:23:17 | 2018-12-05 19:24:49 | 2018-12-05 23:10:52 | 3:46:03 | 0:28:46 | 3:17:17 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
pass | 3288731 | 2018-11-28 05:23:18 | 2018-12-05 19:30:51 | 2018-12-05 21:12:52 | 1:42:01 | 1:29:05 | 0:12:56 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
pass | 3288732 | 2018-11-28 05:23:18 | 2018-12-05 19:30:51 | 2018-12-05 20:26:51 | 0:56:00 | 0:32:29 | 0:23:31 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
dead | 3288733 | 2018-11-28 05:23:19 | 2018-12-05 19:32:07 | 2018-12-06 07:34:19 | 12:02:12 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | — | |||
pass | 3288734 | 2018-11-28 05:23:20 | 2018-12-05 19:36:22 | 2018-12-05 20:36:22 | 1:00:00 | 0:40:50 | 0:19:10 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288735 | 2018-11-28 05:23:21 | 2018-12-05 19:40:01 | 2018-12-05 19:58:00 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com
dead | 3288736 | 2018-11-28 05:23:21 | 2018-12-05 19:42:36 | 2018-12-06 07:44:47 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |||
fail | 3288737 | 2018-11-28 05:23:22 | 2018-12-05 19:51:45 | 2018-12-05 20:59:45 | 1:08:00 | 0:08:58 | 0:59:02 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh015 with status 1: '\n sudo yum -y install bison\n '
fail | 3288738 | 2018-11-28 05:23:23 | 2018-12-05 19:54:03 | 2018-12-05 20:52:03 | 0:58:00 | 0:10:44 | 0:47:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh048 with status 1: '\n sudo yum -y install bison\n '
pass | 3288739 | 2018-11-28 05:23:24 | 2018-12-05 19:58:03 | 2018-12-05 23:28:06 | 3:30:03 | 0:29:37 | 3:00:26 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
fail | 3288740 | 2018-11-28 05:23:24 | 2018-12-05 19:58:03 | 2018-12-05 20:22:03 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh081.front.sepia.ceph.com
pass | 3288741 | 2018-11-28 05:23:25 | 2018-12-05 19:58:16 | 2018-12-05 20:58:16 | 1:00:00 | 0:15:51 | 0:44:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
dead | 3288742 | 2018-11-28 05:23:26 | 2018-12-05 20:05:58 | 2018-12-06 08:08:10 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | — | |||
fail | 3288743 | 2018-11-28 05:23:27 | 2018-12-05 20:07:12 | 2018-12-05 20:21:12 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh044.front.sepia.ceph.com
fail | 3288744 | 2018-11-28 05:23:27 | 2018-12-05 20:14:31 | 2018-12-05 21:12:31 | 0:58:00 | 0:07:19 | 0:50:41 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh022 with status 1: '\n sudo yum -y install bison\n '
dead | 3288745 | 2018-11-28 05:23:28 | 2018-12-05 20:14:42 | 2018-12-06 08:16:58 | 12:02:16 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |||
fail | 3288746 | 2018-11-28 05:23:29 | 2018-12-05 20:21:23 | 2018-12-05 21:21:23 | 1:00:00 | 0:08:26 | 0:51:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh068 with status 1: '\n sudo yum -y install bison\n '
pass | 3288747 | 2018-11-28 05:23:29 | 2018-12-05 20:22:04 | 2018-12-05 20:58:04 | 0:36:00 | 0:20:04 | 0:15:56 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3288748 | 2018-11-28 05:23:30 | 2018-12-05 20:27:02 | 2018-12-05 21:53:03 | 1:26:01 | 1:01:10 | 0:24:51 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
dead | 3288749 | 2018-11-28 05:23:31 | 2018-12-05 20:29:44 | 2018-12-06 08:37:07 | 12:07:23 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |||
fail | 3288750 | 2018-11-28 05:23:32 | 2018-12-05 20:32:34 | 2018-12-05 21:34:35 | 1:02:01 | 0:07:31 | 0:54:30 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install bison\n '
fail | 3288751 | 2018-11-28 05:23:32 | 2018-12-05 20:36:34 | 2018-12-05 21:30:34 | 0:54:00 | 0:07:49 | 0:46:11 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh092 with status 1: '\n sudo yum -y install bison\n '
dead | 3288752 | 2018-11-28 05:23:33 | 2018-12-05 20:47:20 | 2018-12-06 08:49:28 | 12:02:08 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | — | |||
pass | 3288753 | 2018-11-28 05:23:34 | 2018-12-05 20:52:15 | 2018-12-05 22:14:16 | 1:22:01 | 1:11:36 | 0:10:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3288754 | 2018-11-28 05:23:35 | 2018-12-05 20:52:26 | 2018-12-05 21:58:26 | 1:06:00 | 0:06:57 | 0:59:03 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install bison\n '
fail | 3288755 | 2018-11-28 05:23:35 | 2018-12-05 20:52:51 | 2018-12-05 21:40:50 | 0:47:59 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh035.front.sepia.ceph.com
fail | 3288756 | 2018-11-28 05:23:36 | 2018-12-05 20:58:16 | 2018-12-05 21:58:16 | 1:00:00 | 0:08:06 | 0:51:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh003 with status 1: '\n sudo yum -y install bison\n '
fail | 3288757 | 2018-11-28 05:23:37 | 2018-12-05 20:58:17 | 2018-12-05 21:16:16 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
fail | 3288758 | 2018-11-28 05:23:38 | 2018-12-05 20:59:57 | 2018-12-05 23:01:58 | 2:02:01 | 0:23:14 | 1:38:47 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_full_fclose (tasks.cephfs.test_full.TestClusterFull)
pass | 3288759 | 2018-11-28 05:23:39 | 2018-12-05 21:03:55 | 2018-12-05 22:19:56 | 1:16:01 | 0:49:15 | 0:26:46 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3288760 | 2018-11-28 05:23:39 | 2018-12-05 21:10:17 | 2018-12-05 22:14:17 | 1:04:00 | 0:42:29 | 0:21:31 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-05 21:49:14.335514 mon.a mon.0 158.69.65.194:6789/0 159 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3288761 | 2018-11-28 05:23:40 | 2018-12-05 21:12:43 | 2018-12-05 22:08:43 | 0:56:00 | 0:20:46 | 0:35:14 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
pass | 3288762 | 2018-11-28 05:23:41 | 2018-12-05 21:12:53 | 2018-12-05 21:50:53 | 0:38:00 | 0:23:54 | 0:14:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3288763 | 2018-11-28 05:23:42 | 2018-12-05 21:16:28 | 2018-12-05 22:10:28 | 0:54:00 | 0:07:44 | 0:46:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh080 with status 1: '\n sudo yum -y install bison\n '
dead | 3288764 | 2018-11-28 05:23:42 | 2018-12-05 21:21:35 | 2018-12-06 09:23:46 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | — | |||
pass | 3288765 | 2018-11-28 05:23:43 | 2018-12-05 21:23:34 | 2018-12-05 22:25:34 | 1:02:00 | 0:31:12 | 0:30:48 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3288766 | 2018-11-28 05:23:44 | 2018-12-05 21:27:45 | 2018-12-05 22:19:45 | 0:52:00 | 0:07:35 | 0:44:25 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh078 with status 1: '\n sudo yum -y install bison\n '
fail | 3288767 | 2018-11-28 05:23:45 | 2018-12-05 21:30:46 | 2018-12-06 00:08:47 | 2:38:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh022.front.sepia.ceph.com
pass | 3288768 | 2018-11-28 05:23:45 | 2018-12-05 21:33:20 | 2018-12-05 22:43:20 | 1:10:00 | 0:32:02 | 0:37:58 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3288769 | 2018-11-28 05:23:46 | 2018-12-05 21:34:46 | 2018-12-05 22:30:46 | 0:56:00 | 0:19:08 | 0:36:52 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3288770 | 2018-11-28 05:23:47 | 2018-12-05 21:40:52 | 2018-12-05 23:54:53 | 2:14:01 | 0:41:29 | 1:32:32 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3288771 | 2018-11-28 05:23:48 | 2018-12-05 21:45:55 | 2018-12-05 22:19:55 | 0:34:00 | 0:16:09 | 0:17:51 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 |