User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-11-19 05:20:02 | 2018-11-30 16:06:32 | 2018-12-01 23:59:49 | 1 day, 7:53:17 | kcephfs | mimic | ovh | 4bfd25a | 102 | 122 | 26 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 3269571 | 2018-11-19 05:20:30 | 2018-11-30 16:06:32 | 2018-11-30 16:56:32 | 0:50:00 | 0:25:12 | 0:24:48 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3269572 | 2018-11-19 05:20:30 | 2018-11-30 16:06:38 | 2018-11-30 19:42:41 | 3:36:03 | 0:54:14 | 2:41:49 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
pass | 3269573 | 2018-11-19 05:20:31 | 2018-11-30 16:10:44 | 2018-11-30 22:06:49 | 5:56:05 | 0:21:09 | 5:34:56 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3269574 | 2018-11-19 05:20:32 | 2018-11-30 16:14:52 | 2018-11-30 17:54:53 | 1:40:01 | 1:04:58 | 0:35:03 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-11-30 17:17:18.088609 mon.a mon.0 158.69.73.170:6789/0 2092 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3269575 | 2018-11-19 05:20:33 | 2018-11-30 16:16:48 | 2018-11-30 17:32:49 | 1:16:01 | 0:06:57 | 1:09:04 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason:
Command failed on ovh036 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269576 | 2018-11-19 05:20:34 | 2018-11-30 16:24:54 | 2018-12-01 02:21:04 | 9:56:10 | 0:08:24 | 9:47:46 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh064 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269577 | 2018-11-19 05:20:34 | 2018-11-30 16:36:52 | 2018-11-30 18:20:53 | 1:44:01 | 1:16:19 | 0:27:42 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
"2018-11-30 17:55:49.285199 mon.a mon.0 158.69.67.21:6789/0 271 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 3269578 | 2018-11-19 05:20:35 | 2018-11-30 16:40:48 | 2018-11-30 16:56:47 | 0:15:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh012.front.sepia.ceph.com
pass | 3269579 | 2018-11-19 05:20:36 | 2018-11-30 16:43:01 | 2018-11-30 23:39:12 | 6:56:11 | 0:29:44 | 6:26:27 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3269580 | 2018-11-19 05:20:37 | 2018-11-30 16:48:50 | 2018-11-30 18:48:51 | 2:00:01 | 1:29:18 | 0:30:43 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2018-11-30 17:19:03.833731 mon.a mon.0 158.69.72.25:6789/0 284 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 122 sec, mon.c has slow ops (SLOW_OPS)" in cluster log
pass | 3269581 | 2018-11-19 05:20:37 | 2018-11-30 16:48:54 | 2018-11-30 18:12:55 | 1:24:01 | 0:36:11 | 0:47:50 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269582 | 2018-11-19 05:20:38 | 2018-11-30 16:51:03 | 2018-11-30 21:13:07 | 4:22:04 | 0:08:05 | 4:13:59 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh045 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269583 | 2018-11-19 05:20:39 | 2018-11-30 16:52:49 | 2018-11-30 17:14:49 | 0:22:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh038.front.sepia.ceph.com
fail | 3269584 | 2018-11-19 05:20:40 | 2018-11-30 16:56:35 | 2018-11-30 18:22:36 | 1:26:01 | 0:06:30 | 1:19:31 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh059 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269585 | 2018-11-19 05:20:40 | 2018-11-30 16:56:48 | 2018-12-01 00:04:55 | 7:08:07 | 0:08:27 | 6:59:40 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh090 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269586 | 2018-11-19 05:20:41 | 2018-11-30 17:02:42 | 2018-11-30 18:00:42 | 0:58:00 | 0:39:09 | 0:18:51 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason:
"2018-11-30 17:47:16.576783 mon.a mon.0 158.69.64.133:6789/0 273 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
fail | 3269587 | 2018-11-19 05:20:42 | 2018-11-30 17:04:55 | 2018-11-30 17:32:55 | 0:28:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh034.front.sepia.ceph.com
pass | 3269588 | 2018-11-19 05:20:43 | 2018-11-30 17:06:56 | 2018-11-30 23:57:02 | 6:50:06 | 0:41:29 | 6:08:37 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
fail | 3269589 | 2018-11-19 05:20:43 | 2018-11-30 17:14:53 | 2018-11-30 18:20:53 | 1:06:00 | 0:06:46 | 0:59:14 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh048 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269590 | 2018-11-19 05:20:44 | 2018-11-30 17:18:55 | 2018-11-30 18:16:55 | 0:58:00 | 0:20:40 | 0:37:20 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
pass | 3269591 | 2018-11-19 05:20:45 | 2018-11-30 17:20:50 | 2018-11-30 21:34:54 | 4:14:04 | 0:40:34 | 3:33:30 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3269592 | 2018-11-19 05:21:46 | 2018-11-30 17:25:03 | 2018-11-30 17:41:03 | 0:16:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh070.front.sepia.ceph.com
pass | 3269593 | 2018-11-19 05:20:47 | 2018-11-30 17:28:45 | 2018-11-30 18:20:45 | 0:52:00 | 0:25:06 | 0:26:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3269594 | 2018-11-19 05:20:47 | 2018-11-30 17:30:52 | 2018-12-01 01:26:59 | 7:56:07 | 0:36:14 | 7:19:53 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
fail | 3269595 | 2018-11-19 05:20:48 | 2018-11-30 17:33:02 | 2018-11-30 18:29:02 | 0:56:00 | 0:06:40 | 0:49:20 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason:
Command failed on ovh096 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269596 | 2018-11-19 05:20:49 | 2018-11-30 17:33:02 | 2018-11-30 18:27:02 | 0:54:00 | 0:06:33 | 0:47:27 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason:
Command failed on ovh036 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269597 | 2018-11-19 05:20:50 | 2018-11-30 17:35:01 | 2018-11-30 19:01:02 | 1:26:01 | 1:09:40 | 0:16:21 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
pass | 3269598 | 2018-11-19 05:20:50 | 2018-11-30 17:40:50 | 2018-11-30 23:10:55 | 5:30:05 | 0:24:46 | 5:05:19 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
fail | 3269599 | 2018-11-19 05:20:51 | 2018-11-30 17:41:04 | 2018-11-30 18:07:03 | 0:25:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh072.front.sepia.ceph.com
pass | 3269600 | 2018-11-19 05:20:52 | 2018-11-30 17:44:01 | 2018-11-30 19:12:02 | 1:28:01 | 0:57:42 | 0:30:19 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3269601 | 2018-11-19 05:20:53 | 2018-11-30 17:47:06 | 2018-12-01 00:55:12 | 7:08:06 | 0:40:26 | 6:27:40 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair), test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
fail | 3269602 | 2018-11-19 05:20:53 | 2018-11-30 17:55:05 | 2018-11-30 18:17:05 | 0:22:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh093.front.sepia.ceph.com
pass | 3269603 | 2018-11-19 05:20:54 | 2018-11-30 18:00:56 | 2018-11-30 18:54:56 | 0:54:00 | 0:30:39 | 0:23:21 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 3269604 | 2018-11-19 05:20:55 | 2018-11-30 18:03:00 | 2018-11-30 19:39:01 | 1:36:01 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 |
Failure Reason:
Could not reconnect to ubuntu@ovh056.front.sepia.ceph.com
fail | 3269605 | 2018-11-19 05:20:56 | 2018-11-30 18:07:03 | 2018-11-30 18:55:03 | 0:48:00 | 0:06:42 | 0:41:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
Command failed on ovh073 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269606 | 2018-11-19 05:20:56 | 2018-11-30 18:07:04 | 2018-11-30 19:15:05 | 1:08:01 | 0:06:47 | 1:01:14 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh080 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269607 | 2018-11-19 05:20:57 | 2018-11-30 18:13:09 | 2018-11-30 19:49:10 | 1:36:01 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 |
Failure Reason:
Could not reconnect to ubuntu@ovh070.front.sepia.ceph.com
fail | 3269608 | 2018-11-19 05:20:58 | 2018-11-30 18:16:58 | 2018-11-30 18:44:58 | 0:28:00 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh001.front.sepia.ceph.com
fail | 3269609 | 2018-11-19 05:20:59 | 2018-11-30 18:17:06 | 2018-11-30 19:21:06 | 1:04:00 | 0:44:10 | 0:19:50 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-11-30 19:09:39.947312 mon.b mon.0 158.69.66.243:6789/0 148 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3269610 | 2018-11-19 05:20:59 | 2018-11-30 18:18:47 | 2018-12-01 06:16:59 | 11:58:12 | 0:21:30 | 11:36:42 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3269611 | 2018-11-19 05:21:00 | 2018-11-30 18:20:48 | 2018-11-30 18:40:47 | 0:19:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh016.front.sepia.ceph.com
fail | 3269612 | 2018-11-19 05:21:01 | 2018-11-30 18:20:54 | 2018-11-30 19:26:54 | 1:06:00 | 0:06:44 | 0:59:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason:
Command failed on ovh096 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269613 | 2018-11-19 05:21:02 | 2018-11-30 18:20:54 | 2018-11-30 22:16:58 | 3:56:04 | 0:23:23 | 3:32:41 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3269614 | 2018-11-19 05:21:03 | 2018-11-30 18:22:39 | 2018-11-30 19:24:39 | 1:02:00 | 0:06:43 | 0:55:17 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh070 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269615 | 2018-11-19 05:21:03 | 2018-11-30 18:24:51 | 2018-11-30 19:10:51 | 0:46:00 | 0:20:54 | 0:25:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3269616 | 2018-11-19 05:21:04 | 2018-11-30 18:27:08 | 2018-12-01 00:59:19 | 6:32:11 | | | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 |
Failure Reason:
Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com
fail | 3269617 | 2018-11-19 05:21:05 | 2018-11-30 18:29:12 | 2018-11-30 18:47:11 | 0:17:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
fail | 3269618 | 2018-11-19 05:21:05 | 2018-11-30 18:32:40 | 2018-11-30 18:52:39 | 0:19:59 | | | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@ovh024.front.sepia.ceph.com
dead | 3269619 | 2018-11-19 05:21:06 | 2018-11-30 18:32:40 | 2018-12-01 06:34:52 | 12:02:12 | | | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | — |
pass | 3269620 | 2018-11-19 05:21:07 | 2018-11-30 18:33:21 | 2018-11-30 19:03:21 | 0:30:00 | 0:18:24 | 0:11:36 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3269621 | 2018-11-19 05:21:08 | 2018-11-30 18:41:01 | 2018-11-30 19:37:02 | 0:56:01 | 0:06:31 | 0:49:30 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason:
Command failed on ovh059 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269622 | 2018-11-19 05:21:09 | 2018-11-30 18:42:53 | 2018-11-30 20:14:53 | 1:32:00 | 0:53:00 | 0:39:00 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
pass | 3269623 | 2018-11-19 05:21:09 | 2018-11-30 18:45:01 | 2018-12-01 03:09:09 | 8:24:08 | 0:23:38 | 8:00:30 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3269624 | 2018-11-19 05:21:10 | 2018-11-30 18:47:15 | 2018-11-30 19:43:15 | 0:56:00 | 0:06:52 | 0:49:08 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh072 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269625 | 2018-11-19 05:21:11 | 2018-11-30 18:49:04 | 2018-11-30 20:45:05 | 1:56:01 | 1:04:38 | 0:51:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3269626 | 2018-11-19 05:21:12 | 2018-11-30 18:50:57 | 2018-12-01 01:31:03 | 6:40:06 | 0:21:26 | 6:18:40 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3269627 | 2018-11-19 05:21:12 | 2018-11-30 18:52:54 | 2018-11-30 20:16:55 | 1:24:01 | 0:56:02 | 0:27:59 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3269628 | 2018-11-19 05:21:13 | 2018-11-30 18:55:11 | 2018-11-30 19:13:10 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh011.front.sepia.ceph.com
pass | 3269629 | 2018-11-19 05:21:14 | 2018-11-30 18:55:11 | 2018-12-01 00:15:16 | 5:20:05 | 0:27:28 | 4:52:37 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3269630 | 2018-11-19 05:21:15 | 2018-11-30 19:01:17 | 2018-11-30 19:49:17 | 0:48:00 | 0:06:49 | 0:41:11 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh001 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269631 | 2018-11-19 05:21:16 | 2018-11-30 19:03:24 | 2018-11-30 22:39:27 | 3:36:03 | 3:20:57 | 0:15:06 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/iozone.sh) on ovh016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4bfd25addd77b8ea785c8b84a073cff0c477e906 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/iozone.sh'
dead | 3269632 | 2018-11-19 05:21:16 | 2018-11-30 19:10:40 | 2018-12-01 07:13:23 | 12:02:43 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
pass | 3269633 | 2018-11-19 05:21:17 | 2018-11-30 19:10:53 | 2018-11-30 20:16:53 | 1:06:00 | 0:47:43 | 0:18:17 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3269634 | 2018-11-19 05:21:18 | 2018-11-30 19:12:17 | 2018-11-30 19:30:17 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
pass | 3269635 | 2018-11-19 05:21:19 | 2018-11-30 19:13:11 | 2018-11-30 20:27:12 | 1:14:01 | 0:23:36 | 0:50:25 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
fail | 3269636 | 2018-11-19 05:21:19 | 2018-11-30 19:15:15 | 2018-11-30 20:11:15 | 0:56:00 | 0:06:22 | 0:49:38 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh040 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269637 | 2018-11-19 05:21:20 | 2018-11-30 19:21:20 | 2018-11-30 20:11:20 | 0:50:00 | 0:38:20 | 0:11:40 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
dead | 3269638 | 2018-11-19 05:21:21 | 2018-11-30 19:24:53 | 2018-12-01 07:27:11 | 12:02:18 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | — | |||
fail | 3269639 | 2018-11-19 05:21:22 | 2018-11-30 19:27:04 | 2018-11-30 19:49:03 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com
fail | 3269640 | 2018-11-19 05:21:23 | 2018-11-30 19:30:20 | 2018-11-30 19:50:19 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh095.front.sepia.ceph.com
pass | 3269641 | 2018-11-19 05:21:23 | 2018-11-30 19:32:39 | 2018-11-30 22:26:41 | 2:54:02 | 0:39:28 | 2:14:34 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3269642 | 2018-11-19 05:21:24 | 2018-11-30 19:37:16 | 2018-11-30 20:49:17 | 1:12:01 | 0:50:59 | 0:21:02 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-11-30 20:28:36.766360 mon.b mon.0 158.69.69.118:6789/0 136 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3269643 | 2018-11-19 05:21:25 | 2018-11-30 19:39:15 | 2018-11-30 20:07:15 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
pass | 3269644 | 2018-11-19 05:21:26 | 2018-11-30 19:42:42 | 2018-11-30 22:28:44 | 2:46:02 | 0:38:07 | 2:07:55 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
fail | 3269645 | 2018-11-19 05:21:27 | 2018-11-30 19:43:29 | 2018-11-30 20:33:29 | 0:50:00 | 0:06:42 | 0:43:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh070 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269646 | 2018-11-19 05:21:27 | 2018-11-30 19:49:18 | 2018-11-30 20:41:18 | 0:52:00 | 0:06:36 | 0:45:24 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269647 | 2018-11-19 05:21:28 | 2018-11-30 19:49:18 | 2018-11-30 20:25:18 | 0:36:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
dead | 3269648 | 2018-11-19 05:21:29 | 2018-11-30 19:49:18 | 2018-12-01 07:51:35 | 12:02:17 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
pass | 3269649 | 2018-11-19 05:21:30 | 2018-11-30 19:50:33 | 2018-11-30 20:56:33 | 1:06:00 | 0:37:33 | 0:28:27 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3269650 | 2018-11-19 05:21:30 | 2018-11-30 19:53:04 | 2018-11-30 21:27:05 | 1:34:01 | 1:12:35 | 0:21:26 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3269651 | 2018-11-19 05:21:31 | 2018-11-30 20:07:28 | 2018-12-01 08:09:40 | 12:02:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | — | |||
fail | 3269652 | 2018-11-19 05:21:32 | 2018-11-30 20:11:29 | 2018-11-30 20:29:29 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh086.front.sepia.ceph.com
pass | 3269653 | 2018-11-19 05:21:33 | 2018-11-30 20:11:29 | 2018-11-30 21:11:30 | 1:00:01 | 0:28:50 | 0:31:11 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 3269654 | 2018-11-19 05:21:34 | 2018-11-30 20:15:07 | 2018-11-30 23:55:10 | 3:40:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh002.front.sepia.ceph.com
fail | 3269655 | 2018-11-19 05:21:34 | 2018-11-30 20:17:07 | 2018-11-30 22:07:08 | 1:50:01 | 1:28:18 | 0:21:43 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-11-30 21:41:22.690594 mon.a mon.0 158.69.69.133:6789/0 527 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3269656 | 2018-11-19 05:21:35 | 2018-11-30 20:17:07 | 2018-11-30 21:13:07 | 0:56:00 | 0:06:43 | 0:49:17 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh050 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3269657 | 2018-11-19 05:21:36 | 2018-11-30 20:25:21 | 2018-12-01 08:27:33 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | — | |||
pass | 3269658 | 2018-11-19 05:21:37 | 2018-11-30 20:27:25 | 2018-11-30 21:45:26 | 1:18:01 | 0:46:26 | 0:31:35 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
dead | 3269659 | 2018-11-19 05:21:37 | 2018-11-30 20:29:35 | 2018-12-01 08:36:45 | 12:07:10 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
fail | 3269660 | 2018-11-19 05:21:38 | 2018-11-30 20:33:32 | 2018-11-30 22:23:33 | 1:50:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com
pass | 3269661 | 2018-11-19 05:21:39 | 2018-11-30 20:41:30 | 2018-11-30 21:25:30 | 0:44:00 | 0:30:56 | 0:13:04 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3269662 | 2018-11-19 05:21:40 | 2018-11-30 20:45:19 | 2018-11-30 21:03:18 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com
pass | 3269663 | 2018-11-19 05:21:40 | 2018-11-30 20:49:30 | 2018-12-01 01:21:34 | 4:32:04 | 0:23:24 | 4:08:40 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3269664 | 2018-11-19 05:21:41 | 2018-11-30 20:56:47 | 2018-11-30 21:50:47 | 0:54:00 | 0:06:47 | 0:47:13 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh059 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269665 | 2018-11-19 05:21:42 | 2018-11-30 21:03:26 | 2018-11-30 21:33:26 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh027.front.sepia.ceph.com
fail | 3269666 | 2018-11-19 05:21:43 | 2018-11-30 21:11:43 | 2018-11-30 23:47:45 | 2:36:02 | 0:08:34 | 2:27:28 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh037 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269667 | 2018-11-19 05:21:43 | 2018-11-30 21:13:21 | 2018-11-30 22:09:21 | 0:56:00 | 0:36:08 | 0:19:52 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3269668 | 2018-11-19 05:21:44 | 2018-11-30 21:13:21 | 2018-11-30 22:07:21 | 0:54:00 | 0:21:29 | 0:32:31 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
dead | 3269669 | 2018-11-19 05:21:45 | 2018-11-30 21:25:38 | 2018-12-01 09:27:49 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | — | |||
pass | 3269670 | 2018-11-19 05:21:46 | 2018-11-30 21:27:19 | 2018-11-30 22:11:19 | 0:44:00 | 0:19:49 | 0:24:11 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3269671 | 2018-11-19 05:21:46 | 2018-11-30 21:33:30 | 2018-11-30 22:09:30 | 0:36:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh094.front.sepia.ceph.com
pass | 3269672 | 2018-11-19 05:21:47 | 2018-11-30 21:35:08 | 2018-11-30 23:11:09 | 1:36:01 | 0:47:49 | 0:48:12 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
fail | 3269673 | 2018-11-19 05:21:48 | 2018-11-30 21:45:39 | 2018-12-01 09:35:51 | 11:50:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh041.front.sepia.ceph.com
fail | 3269674 | 2018-11-19 05:21:49 | 2018-11-30 21:51:00 | 2018-11-30 23:11:01 | 1:20:01 | 1:07:45 | 0:12:16 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-11-30 22:31:00.353343 mon.b mon.0 158.69.71.11:6789/0 2309 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3269675 | 2018-11-19 05:21:49 | 2018-11-30 22:07:04 | 2018-11-30 23:25:04 | 1:18:00 | 0:56:03 | 0:21:57 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3269676 | 2018-11-19 05:21:50 | 2018-11-30 22:07:09 | 2018-12-01 02:47:13 | 4:40:04 | 0:22:12 | 4:17:52 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3269677 | 2018-11-19 05:21:51 | 2018-11-30 22:07:22 | 2018-11-30 23:37:23 | 1:30:01 | 1:05:27 | 0:24:34 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3269678 | 2018-11-19 05:21:52 | 2018-11-30 22:09:35 | 2018-11-30 22:29:34 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh093.front.sepia.ceph.com
pass | 3269679 | 2018-11-19 05:21:52 | 2018-11-30 22:09:35 | 2018-12-01 04:23:40 | 6:14:05 | 0:32:54 | 5:41:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3269680 | 2018-11-19 05:21:53 | 2018-11-30 22:11:32 | 2018-11-30 22:47:31 | 0:35:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
pass | 3269681 | 2018-11-19 05:21:54 | 2018-11-30 22:17:11 | 2018-11-30 23:01:11 | 0:44:00 | 0:34:48 | 0:09:12 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
dead | 3269682 | 2018-11-19 05:21:55 | 2018-11-30 22:23:46 | 2018-12-01 10:35:49 | 12:12:03 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
fail | 3269683 | 2018-11-19 05:21:56 | 2018-11-30 22:26:55 | 2018-11-30 22:46:55 | 0:20:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh099.front.sepia.ceph.com
dead | 3269684 | 2018-11-19 05:21:56 | 2018-11-30 22:28:57 | 2018-12-01 10:31:32 | 12:02:35 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
dead | 3269685 | 2018-11-19 05:21:57 | 2018-11-30 22:29:35 | 2018-12-01 10:31:47 | 12:02:12 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |||
fail | 3269686 | 2018-11-19 05:21:58 | 2018-11-30 22:39:40 | 2018-11-30 23:27:40 | 0:48:00 | 0:06:26 | 0:41:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh051 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269687 | 2018-11-19 05:21:59 | 2018-11-30 22:47:08 | 2018-11-30 23:59:08 | 1:12:00 | 0:06:44 | 1:05:16 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269688 | 2018-11-19 05:21:59 | 2018-11-30 22:47:32 | 2018-12-01 00:51:34 | 2:04:02 | 0:31:20 | 1:32:42 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
pass | 3269689 | 2018-11-19 05:22:00 | 2018-11-30 22:50:02 | 2018-12-01 00:02:03 | 1:12:01 | 0:45:36 | 0:26:25 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3269690 | 2018-11-19 05:22:01 | 2018-11-30 23:01:25 | 2018-11-30 23:55:25 | 0:54:00 | 0:06:38 | 0:47:22 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh097 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269691 | 2018-11-19 05:22:02 | 2018-11-30 23:11:08 | 2018-12-01 05:11:13 | 6:00:05 | 0:43:07 | 5:16:58 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
pass | 3269692 | 2018-11-19 05:22:03 | 2018-11-30 23:11:08 | 2018-12-01 00:27:08 | 1:16:00 | 0:36:57 | 0:39:03 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269693 | 2018-11-19 05:22:03 | 2018-11-30 23:11:10 | 2018-11-30 23:39:09 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh072.front.sepia.ceph.com
pass | 3269694 | 2018-11-19 05:22:04 | 2018-11-30 23:18:07 | 2018-12-01 06:14:14 | 6:56:07 | 0:36:20 | 6:19:47 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3269695 | 2018-11-19 05:22:05 | 2018-11-30 23:24:47 | 2018-11-30 23:52:47 | 0:28:00 | 0:19:05 | 0:08:55 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3269696 | 2018-11-19 05:22:06 | 2018-11-30 23:25:05 | 2018-11-30 23:43:05 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh068.front.sepia.ceph.com
fail | 3269697 | 2018-11-19 05:22:07 | 2018-11-30 23:27:53 | 2018-12-01 01:17:54 | 1:50:01 | 1:08:12 | 0:41:49 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason: "2018-12-01 00:45:26.904727 mon.b mon.0 158.69.66.163:6789/0 231 : cluster [ERR] Health check failed: mon b is very low on available space (MON_DISK_CRIT)" in cluster log
dead | 3269698 | 2018-11-19 05:22:07 | 2018-11-30 23:37:35 | 2018-12-01 11:39:47 | 12:02:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
fail | 3269699 | 2018-11-19 05:22:08 | 2018-11-30 23:39:22 | 2018-12-01 01:03:23 | 1:24:01 | 1:06:56 | 0:17:05 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-01 00:29:27.571641 mon.a mon.0 158.69.65.47:6789/0 2728 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3269700 | 2018-11-19 05:22:09 | 2018-11-30 23:39:22 | 2018-12-01 00:01:22 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh094.front.sepia.ceph.com
fail | 3269701 | 2018-11-19 05:22:10 | 2018-11-30 23:43:17 | 2018-12-01 00:39:17 | 0:56:00 | 0:35:31 | 0:20:29 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair), test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
fail | 3269702 | 2018-11-19 05:22:11 | 2018-11-30 23:47:58 | 2018-12-01 00:45:58 | 0:58:00 | 0:06:17 | 0:51:43 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh095 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269703 | 2018-11-19 05:22:11 | 2018-11-30 23:53:00 | 2018-12-01 00:43:00 | 0:50:00 | 0:30:23 | 0:19:37 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
dead | 3269704 | 2018-11-19 05:22:12 | 2018-11-30 23:55:23 | 2018-12-01 11:57:34 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | — | |||
fail | 3269705 | 2018-11-19 05:22:13 | 2018-11-30 23:55:26 | 2018-12-01 00:51:26 | 0:56:00 | 0:06:26 | 0:49:34 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh097 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269706 | 2018-11-19 05:22:14 | 2018-11-30 23:57:14 | 2018-12-01 00:55:14 | 0:58:00 | 0:39:48 | 0:18:12 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on ovh060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4bfd25addd77b8ea785c8b84a073cff0c477e906 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 3269707 | 2018-11-19 05:22:15 | 2018-11-30 23:59:21 | 2018-12-01 02:21:23 | 2:22:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh011.front.sepia.ceph.com
fail | 3269708 | 2018-11-19 05:22:15 | 2018-12-01 00:01:34 | 2018-12-01 00:19:34 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh049.front.sepia.ceph.com
fail | 3269709 | 2018-11-19 05:22:16 | 2018-12-01 00:02:04 | 2018-12-01 01:56:05 | 1:54:01 | 1:33:34 | 0:20:27 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-01 00:44:07.199300 mon.b mon.0 158.69.68.170:6789/0 143 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3269710 | 2018-11-19 05:22:17 | 2018-12-01 00:05:08 | 2018-12-01 02:29:10 | 2:24:02 | 0:22:52 | 2:01:10 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3269711 | 2018-11-19 05:22:18 | 2018-12-01 00:13:46 | 2018-12-01 01:13:47 | 1:00:01 | 0:06:34 | 0:53:27 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh024 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269712 | 2018-11-19 05:22:19 | 2018-12-01 00:15:28 | 2018-12-01 01:39:29 | 1:24:01 | 0:49:48 | 0:34:13 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3269713 | 2018-11-19 05:22:19 | 2018-12-01 00:19:46 | 2018-12-01 03:59:49 | 3:40:03 | 0:24:31 | 3:15:32 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3269714 | 2018-11-19 05:22:20 | 2018-12-01 00:27:21 | 2018-12-01 00:47:20 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh031.front.sepia.ceph.com
pass | 3269715 | 2018-11-19 05:22:21 | 2018-12-01 00:39:30 | 2018-12-01 01:19:29 | 0:39:59 | 0:25:33 | 0:14:26 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
dead | 3269716 | 2018-11-19 05:22:22 | 2018-12-01 00:42:57 | 2018-12-01 12:45:09 | 12:02:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | — | |||
pass | 3269717 | 2018-11-19 05:22:22 | 2018-12-01 00:43:01 | 2018-12-01 01:51:01 | 1:08:00 | 0:51:11 | 0:16:49 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269718 | 2018-11-19 05:22:23 | 2018-12-01 00:45:49 | 2018-12-01 00:59:48 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh034.front.sepia.ceph.com
pass | 3269719 | 2018-11-19 05:22:24 | 2018-12-01 00:45:59 | 2018-12-01 12:46:10 | 12:00:11 | 0:36:29 | 11:23:42 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3269720 | 2018-11-19 05:22:25 | 2018-12-01 00:47:33 | 2018-12-01 01:21:32 | 0:33:59 | 0:19:32 | 0:14:27 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3269721 | 2018-11-19 05:22:26 | 2018-12-01 00:50:42 | 2018-12-01 01:04:41 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
fail | 3269722 | 2018-11-19 05:22:26 | 2018-12-01 00:51:27 | 2018-12-01 02:05:27 | 1:14:00 | 0:06:51 | 1:07:09 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh040 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269723 | 2018-11-19 05:22:27 | 2018-12-01 00:51:35 | 2018-12-01 02:55:36 | 2:04:01 | 0:22:54 | 1:41:07 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3269724 | 2018-11-19 05:22:28 | 2018-12-01 00:55:25 | 2018-12-01 01:53:25 | 0:58:00 | 0:07:55 | 0:50:05 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh081 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269725 | 2018-11-19 05:22:29 | 2018-12-01 00:55:25 | 2018-12-01 02:11:26 | 1:16:01 | 0:58:05 | 0:17:56 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3269726 | 2018-11-19 05:22:29 | 2018-12-01 00:59:32 | 2018-12-01 01:51:32 | 0:52:00 | 0:22:25 | 0:29:35 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3269727 | 2018-11-19 05:22:30 | 2018-12-01 00:59:49 | 2018-12-01 02:13:50 | 1:14:01 | 0:07:16 | 1:06:45 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269728 | 2018-11-19 05:22:31 | 2018-12-01 01:03:35 | 2018-12-01 01:55:35 | 0:52:00 | 0:29:33 | 0:22:27 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3269729 | 2018-11-19 05:22:32 | 2018-12-01 01:04:54 | 2018-12-01 02:14:54 | 1:10:00 | 0:29:27 | 0:40:33 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3269730 | 2018-11-19 05:22:32 | 2018-12-01 01:13:59 | 2018-12-01 01:31:59 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh024.front.sepia.ceph.com
fail | 3269731 | 2018-11-19 05:22:33 | 2018-12-01 01:18:07 | 2018-12-01 01:46:07 | 0:28:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com
dead | 3269732 | 2018-11-19 05:22:34 | 2018-12-01 01:19:41 | 2018-12-01 13:22:24 | 12:02:43 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
pass | 3269733 | 2018-11-19 05:22:35 | 2018-12-01 01:21:45 | 2018-12-01 02:33:45 | 1:12:00 | 0:54:06 | 0:17:54 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
pass | 3269734 | 2018-11-19 05:22:35 | 2018-12-01 01:21:45 | 2018-12-01 02:27:45 | 1:06:00 | 0:44:43 | 0:21:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3269735 | 2018-11-19 05:22:36 | 2018-12-01 01:27:11 | 2018-12-01 03:09:12 | 1:42:01 | 0:23:04 | 1:18:57 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
fail | 3269736 | 2018-11-19 05:22:37 | 2018-12-01 01:31:06 | 2018-12-01 02:19:06 | 0:48:00 | 0:06:32 | 0:41:28 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh037 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269737 | 2018-11-19 05:22:38 | 2018-12-01 01:32:12 | 2018-12-01 02:58:13 | 1:26:01 | 0:57:08 | 0:28:53 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
pass | 3269738 | 2018-11-19 05:22:38 | 2018-12-01 01:39:43 | 2018-12-01 13:37:54 | 11:58:11 | 0:34:59 | 11:23:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
dead | 3269739 | 2018-11-19 05:22:39 | 2018-12-01 01:47:48 | 2018-12-01 13:50:11 | 12:02:23 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
pass | 3269740 | 2018-11-19 05:22:40 | 2018-12-01 01:51:04 | 2018-12-01 02:51:04 | 1:00:00 | 0:20:29 | 0:39:31 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
dead | 3269741 | 2018-11-19 05:22:41 | 2018-12-01 01:51:33 | 2018-12-01 13:53:45 | 12:02:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | — | |||
pass | 3269742 | 2018-11-19 05:22:41 | 2018-12-01 01:53:29 | 2018-12-01 03:01:29 | 1:08:00 | 0:42:53 | 0:25:07 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269743 | 2018-11-19 05:22:42 | 2018-12-01 01:55:38 | 2018-12-01 02:55:39 | 1:00:01 | 0:06:40 | 0:53:21 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh054 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269744 | 2018-11-19 05:22:43 | 2018-12-01 01:56:17 | 2018-12-01 06:06:20 | 4:10:03 | 0:37:20 | 3:32:43 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3269745 | 2018-11-19 05:22:43 | 2018-12-01 02:05:31 | 2018-12-01 02:43:30 | 0:37:59 | 0:19:00 | 0:18:59 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3269746 | 2018-11-19 05:22:44 | 2018-12-01 02:11:29 | 2018-12-01 03:25:29 | 1:14:00 | 0:06:48 | 1:07:12 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh100 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269747 | 2018-11-19 05:22:45 | 2018-12-01 02:13:53 | 2018-12-01 03:55:54 | 1:42:01 | 1:12:30 | 0:29:31 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason: "2018-12-01 03:24:04.598981 mon.b mon.0 158.69.68.61:6789/0 234 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
dead | 3269748 | 2018-11-19 05:22:46 | 2018-12-01 02:14:59 | 2018-12-01 14:17:10 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
fail | 3269749 | 2018-11-19 05:22:46 | 2018-12-01 02:19:09 | 2018-12-01 03:17:09 | 0:58:00 | 0:06:47 | 0:51:13 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh045 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269750 | 2018-11-19 05:22:47 | 2018-12-01 02:21:09 | 2018-12-01 03:27:09 | 1:06:00 | 0:06:43 | 0:59:17 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh015 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3269751 | 2018-11-19 05:22:48 | 2018-12-01 02:21:24 | 2018-12-01 14:28:48 | 12:07:24 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
pass | 3269752 | 2018-11-19 05:22:49 | 2018-12-01 03:23:33 | 2018-12-01 05:21:34 | 1:58:01 | 1:05:57 | 0:52:04 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3269753 | 2018-11-19 05:22:49 | 2018-12-01 03:23:41 | 2018-12-01 03:55:40 | 0:31:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh087.front.sepia.ceph.com
fail | 3269754 | 2018-11-19 05:22:50 | 2018-12-01 03:25:33 | 2018-12-01 06:25:35 | 3:00:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh011.front.sepia.ceph.com
pass | 3269755 | 2018-11-19 05:22:51 | 2018-12-01 03:27:24 | 2018-12-01 05:35:25 | 2:08:01 | 1:53:40 | 0:14:21 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3269756 | 2018-11-19 05:22:52 | 2018-12-01 03:35:49 | 2018-12-01 04:37:50 | 1:02:01 | 0:06:39 | 0:55:22 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh092 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269757 | 2018-11-19 05:22:52 | 2018-12-01 03:39:25 | 2018-12-01 14:33:35 | 10:54:10 | 0:08:29 | 10:45:41 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh048 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269758 | 2018-11-19 05:22:53 | 2018-12-01 03:45:50 | 2018-12-01 05:05:50 | 1:20:00 | 0:54:49 | 0:25:11 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3269759 | 2018-11-19 05:22:54 | 2018-12-01 03:51:42 | 2018-12-01 04:09:41 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
pass | 3269760 | 2018-11-19 05:22:55 | 2018-12-01 03:55:48 | 2018-12-01 12:57:56 | 9:02:08 | 0:21:40 | 8:40:28 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
pass | 3269761 | 2018-11-19 05:22:55 | 2018-12-01 03:55:55 | 2018-12-01 04:53:55 | 0:58:00 | 0:30:37 | 0:27:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3269762 | 2018-11-19 05:22:56 | 2018-12-01 03:59:51 | 2018-12-01 04:31:50 | 0:31:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh087.front.sepia.ceph.com
pass | 3269763 | 2018-11-19 05:22:57 | 2018-12-01 04:01:50 | 2018-12-01 12:11:57 | 8:10:07 | 0:24:40 | 7:45:27 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
fail | 3269764 | 2018-11-19 05:22:58 | 2018-12-01 04:09:42 | 2018-12-01 05:21:42 | 1:12:00 | 0:57:15 | 0:14:45 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-01 04:54:53.515034 mon.b mon.1 158.69.65.50:6789/0 17 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3269765 | 2018-11-19 05:22:58 | 2018-12-01 04:19:37 | 2018-12-01 04:33:36 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com
pass | 3269766 | 2018-11-19 05:22:59 | 2018-12-01 04:23:44 | 2018-12-01 12:51:52 | 8:28:08 | 0:47:09 | 7:40:59 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
pass | 3269767 | 2018-11-19 05:23:00 | 2018-12-01 04:31:53 | 2018-12-01 05:27:54 | 0:56:01 | 0:40:29 | 0:15:32 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269768 | 2018-11-19 05:23:01 | 2018-12-01 04:33:46 | 2018-12-01 04:55:45 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh084.front.sepia.ceph.com
pass | 3269769 | 2018-11-19 05:23:01 | 2018-12-01 04:37:54 | 2018-12-01 15:56:09 | 11:18:15 | 0:41:46 | 10:36:29 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
fail | 3269770 | 2018-11-19 05:23:02 | 2018-12-01 04:53:58 | 2018-12-01 05:53:58 | 1:00:00 | 0:08:41 | 0:51:19 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3269771 | 2018-11-19 05:23:03 | 2018-12-01 04:55:49 | 2018-12-01 05:51:49 | 0:56:00 | 0:06:18 | 0:49:42 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh095 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269772 | 2018-11-19 05:23:04 | 2018-12-01 10:14:39 | 2018-12-01 12:24:40 | 2:10:01 | 0:48:43 | 1:21:18 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
dead | 3269773 | 2018-11-19 05:23:04 | 2018-12-01 10:24:23 | 2018-12-01 22:26:35 | 12:02:12 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | — | |||
fail | 3269774 | 2018-11-19 05:23:05 | 2018-12-01 10:26:03 | 2018-12-01 11:28:03 | 1:02:00 | 0:06:52 | 0:55:08 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh065 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269775 | 2018-11-19 05:23:06 | 2018-12-01 10:30:45 | 2018-12-01 11:50:45 | 1:20:00 | 1:04:01 | 0:15:59 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3269776 | 2018-11-19 05:23:07 | 2018-12-01 10:31:34 | 2018-12-01 22:33:46 | 12:02:12 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | — | |||
fail | 3269777 | 2018-11-19 05:23:07 | 2018-12-01 10:32:01 | 2018-12-01 11:36:01 | 1:04:00 | 0:06:42 | 0:57:18 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh023 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3269778 | 2018-11-19 05:23:08 | 2018-12-01 10:32:06 | 2018-12-01 11:22:06 | 0:50:00 | 0:30:40 | 0:19:20 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
dead | 3269779 | 2018-11-19 05:23:09 | 2018-12-01 10:34:27 | 2018-12-01 22:36:39 | 12:02:12 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | — | |||
fail | 3269780 | 2018-11-19 05:23:09 | 2018-12-01 10:36:02 | 2018-12-01 12:30:03 | 1:54:01 | 1:26:12 | 0:27:49 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-01 11:15:49.695544 mon.b mon.0 158.69.66.202:6789/0 186 : cluster [WRN] Health check failed: 2 slow ops, oldest one blocked for 124 sec, mon.c has slow ops (SLOW_OPS)" in cluster log
pass | 3269781 | 2018-11-19 05:23:10 | 2018-12-01 10:38:35 | 2018-12-01 12:12:36 | 1:34:01 | 1:07:01 | 0:27:00 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
dead | 3269782 | 2018-11-19 05:23:11 | 2018-12-01 10:38:35 | 2018-12-01 22:45:50 | 12:07:15 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
pass | 3269783 | 2018-11-19 05:23:12 | 2018-12-01 10:42:42 | 2018-12-01 11:46:42 | 1:04:00 | 0:50:13 | 0:13:47 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3269784 | 2018-11-19 05:23:12 | 2018-12-01 10:42:48 | 2018-12-01 11:16:48 | 0:34:00 | 0:12:13 | 0:21:47 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
{'ovh055.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:08:53.892090', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:08:53.885501', 'delta': '0:00:00.006589', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh067.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:08:49.682804', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:08:49.676235', 'delta': '0:00:00.006569', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh049.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:10:29.249785', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:10:29.243910', 'delta': '0:00:00.005875', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}} |
pass | 3269785 | 2018-11-19 05:23:13 | 2018-12-01 10:44:34 | 2018-12-01 14:46:42 | 4:02:08 | 0:22:49 | 3:39:19 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3269786 | 2018-11-19 05:23:14 | 2018-12-01 10:48:35 | 2018-12-01 11:36:35 | 0:48:00 | 0:31:35 | 0:16:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3269787 | 2018-11-19 05:23:15 | 2018-12-01 10:52:46 | 2018-12-01 13:56:49 | 3:04:03 | 2:38:43 | 0:25:20 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3269788 | 2018-11-19 05:23:15 | 2018-12-01 10:52:46 | 2018-12-01 14:58:49 | 4:06:03 | 0:08:39 | 3:57:24 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh020 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
pass | 3269789 | 2018-11-19 05:23:16 | 2018-12-01 11:02:51 | 2018-12-01 12:30:51 | 1:28:00 | 0:59:28 | 0:28:32 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3269790 | 2018-11-19 05:23:17 | 2018-12-01 11:04:27 | 2018-12-01 12:02:27 | 0:58:00 | 0:06:41 | 0:51:19 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason:
Command failed on ovh059 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
pass | 3269791 | 2018-11-19 05:23:18 | 2018-12-01 11:04:27 | 2018-12-01 14:42:30 | 3:38:03 | 0:41:40 | 2:56:23 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3269792 | 2018-11-19 05:23:18 | 2018-12-01 11:05:11 | 2018-12-01 11:41:11 | 0:36:00 | 0:10:25 | 0:25:35 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
{'ovh067.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:35:05.886682', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:35:05.880226', 'delta': '0:00:00.006456', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh022.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:34:24.019193', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:34:24.013232', 'delta': '0:00:00.005961', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}, 'ovh049.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['/bin/sh: 1: ifdown: not found'], 'cmd': 'ifdown ens3 && ifup ens3', 'end': '2018-12-01 11:35:05.429181', '_ansible_no_log': False, 'stdout': '', 'changed': True, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'ifdown ens3 && ifup ens3', 'removes': None, 'warn': True, '_uses_shell': True, 'stdin': None}}, 'start': '2018-12-01 11:35:05.423631', 'delta': '0:00:00.005550', 'stderr': '/bin/sh: 1: ifdown: not found', 'rc': 127, 'msg': 'non-zero return code', 'stdout_lines': []}} |
pass | 3269793 | 2018-11-19 05:23:19 | 2018-12-01 11:06:59 | 2018-12-01 12:02:59 | 0:56:00 | 0:20:25 | 0:35:35 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
dead | 3269794 | 2018-11-19 05:23:20 | 2018-12-01 11:12:21 | 2018-12-01 23:14:34 | 12:02:13 | — | — | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |
fail | 3269795 | 2018-11-19 05:23:20 | 2018-12-01 11:16:51 | 2018-12-01 12:20:51 | 1:04:00 | 0:06:41 | 0:57:19 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason:
Command failed on ovh084 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
fail | 3269796 | 2018-11-19 05:23:21 | 2018-12-01 11:22:24 | 2018-12-01 11:52:24 | 0:30:00 | — | — | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@ovh042.front.sepia.ceph.com |
fail | 3269797 | 2018-11-19 05:23:22 | 2018-12-01 11:24:32 | 2018-12-01 11:48:31 | 0:23:59 | — | — | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason:
Could not reconnect to ubuntu@ovh016.front.sepia.ceph.com |
pass | 3269798 | 2018-11-19 05:23:23 | 2018-12-01 11:24:32 | 2018-12-01 15:58:35 | 4:34:03 | 0:25:12 | 4:08:51 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
fail | 3269799 | 2018-11-19 05:23:23 | 2018-12-01 11:28:17 | 2018-12-01 12:24:17 | 0:56:00 | 0:40:05 | 0:15:55 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
"2018-12-01 12:06:47.087714 mon.a mon.0 158.69.68.190:6789/0 1590 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
pass | 3269800 | 2018-11-19 05:23:24 | 2018-12-01 11:34:36 | 2018-12-01 12:52:36 | 1:18:00 | 0:54:40 | 0:23:20 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3269801 | 2018-11-19 05:23:25 | 2018-12-01 11:34:36 | 2018-12-01 20:26:44 | 8:52:08 | 0:09:41 | 8:42:27 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh031 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
fail | 3269802 | 2018-11-19 05:23:26 | 2018-12-01 11:36:15 | 2018-12-01 11:54:14 | 0:17:59 | — | — | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@ovh040.front.sepia.ceph.com |
pass | 3269803 | 2018-11-19 05:23:26 | 2018-12-01 11:36:36 | 2018-12-01 12:28:36 | 0:52:00 | 0:29:33 | 0:22:27 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3269804 | 2018-11-19 05:23:27 | 2018-12-01 11:39:17 | 2018-12-01 16:57:22 | 5:18:05 | 0:21:21 | 4:56:44 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3269805 | 2018-11-19 05:23:28 | 2018-12-01 11:39:59 | 2018-12-01 12:31:59 | 0:52:00 | 0:06:23 | 0:45:37 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
Command failed on ovh049 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
fail | 3269806 | 2018-11-19 05:23:28 | 2018-12-01 11:41:25 | 2018-12-01 12:25:25 | 0:44:00 | 0:35:44 | 0:08:16 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed (workunit test suites/ffsb.sh) on ovh016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4bfd25addd77b8ea785c8b84a073cff0c477e906 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh' |
fail | 3269807 | 2018-11-19 05:23:29 | 2018-12-01 11:43:37 | 2018-12-01 15:59:40 | 4:16:03 | 0:24:45 | 3:51:18 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Test failure: test_full_fclose (tasks.cephfs.test_full.TestClusterFull) |
fail | 3269808 | 2018-11-19 05:23:30 | 2018-12-01 11:46:48 | 2018-12-01 12:22:47 | 0:35:59 | — | — | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@ovh063.front.sepia.ceph.com |
fail | 3269809 | 2018-11-19 05:23:31 | 2018-12-01 11:48:44 | 2018-12-01 12:48:44 | 1:00:00 | 0:06:51 | 0:53:09 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh039 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
dead | 3269810 | 2018-11-19 05:23:31 | 2018-12-01 11:50:48 | 2018-12-01 23:53:00 | 12:02:12 | — | — | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |
fail | 3269811 | 2018-11-19 05:23:32 | 2018-12-01 11:52:38 | 2018-12-01 12:48:38 | 0:56:00 | 0:06:32 | 0:49:28 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason:
Command failed on ovh009 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
fail | 3269812 | 2018-11-19 05:23:33 | 2018-12-01 11:54:29 | 2018-12-01 12:54:29 | 1:00:00 | 0:06:30 | 0:53:30 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason:
Command failed on ovh033 with status 1: '\n sudo yum -y install ceph-radosgw\n ' |
dead | 3269813 | 2018-11-19 05:23:33 | 2018-12-01 11:57:38 | 2018-12-01 23:59:49 | 12:02:11 | — | — | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | — | |
pass | 3269814 | 2018-11-19 05:23:34 | 2018-12-01 12:02:39 | 2018-12-01 13:12:40 | 1:10:01 | 0:34:56 | 0:35:05 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3269815 | 2018-11-19 05:23:35 | 2018-12-01 12:03:01 | 2018-12-01 12:29:00 | 0:25:59 | — | — | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason:
Could not reconnect to ubuntu@ovh050.front.sepia.ceph.com |
pass | 3269816 | 2018-11-19 05:23:36 | 2018-12-01 12:10:32 | 2018-12-01 16:00:35 | 3:50:03 | 0:45:45 | 3:04:18 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
pass | 3269817 | 2018-11-19 05:23:36 | 2018-12-01 12:12:11 | 2018-12-01 13:10:12 | 0:58:01 | 0:35:04 | 0:22:57 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3269818 | 2018-11-19 05:23:37 | 2018-12-01 12:12:37 | 2018-12-01 13:12:37 | 1:00:00 | 0:24:28 | 0:35:32 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3269819 | 2018-11-19 05:23:38 | 2018-12-01 12:20:54 | 2018-12-01 13:56:55 | 1:36:01 | 0:46:42 | 0:49:19 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3269820 | 2018-11-19 05:23:38 | 2018-12-01 12:22:54 | 2018-12-01 13:08:54 | 0:46:00 | 0:21:57 | 0:24:03 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 |