User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2018-12-10 05:20:01 | 2018-12-23 05:42:01 | 2018-12-24 07:33:55 | 1 day, 1:51:54 | kcephfs | mimic | ovh | 88b1cef | 96 | 128 | 26 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 3322945 | 2018-12-10 05:20:21 | 2018-12-23 05:42:01 | 2018-12-23 07:00:02 | 1:18:01 | 0:07:13 | 1:10:48 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
Failure Reason: Command failed on ovh020 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322946 | 2018-12-10 05:20:22 | 2018-12-23 05:44:47 | 2018-12-23 10:36:50 | 4:52:03 | 0:07:35 | 4:44:28 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
Failure Reason: Command failed on ovh076 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3322947 | 2018-12-10 05:20:22 | 2018-12-23 05:54:03 | 2018-12-23 18:05:51 | 12:11:48 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
fail | 3322948 | 2018-12-10 05:20:23 | 2018-12-23 05:56:00 | 2018-12-23 07:58:01 | 2:02:01 | 1:11:05 | 0:50:56 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-23 07:08:10.380160 mon.b mon.0 158.69.70.196:6789/0 1513 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3322949 | 2018-12-10 05:20:24 | 2018-12-23 05:56:00 | 2018-12-23 07:04:00 | 1:08:00 | 0:09:28 | 0:58:32 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: while scanning a plain scalar in "/tmp/teuth_ansible_failures_8GwD5w", line 1, column 251: found unexpected ':' in "/tmp/teuth_ansible_failures_8GwD5w", line 1, column 271. See http://pyyaml.org/wiki/YAMLColonInFlowContext for details.
dead | 3322950 | 2018-12-10 05:20:24 | 2018-12-23 06:21:34 | 2018-12-23 18:23:45 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | — | |||
pass | 3322951 | 2018-12-10 05:20:25 | 2018-12-23 06:32:47 | 2018-12-23 08:38:48 | 2:06:01 | 1:30:16 | 0:35:45 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3322952 | 2018-12-10 05:20:26 | 2018-12-23 06:38:06 | 2018-12-23 07:54:06 | 1:16:00 | 0:07:14 | 1:08:46 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322953 | 2018-12-10 05:20:26 | 2018-12-23 06:40:03 | 2018-12-23 14:58:10 | 8:18:07 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh054.front.sepia.ceph.com
pass | 3322954 | 2018-12-10 05:20:27 | 2018-12-23 06:42:57 | 2018-12-23 08:22:58 | 1:40:01 | 1:20:10 | 0:19:51 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3322955 | 2018-12-10 05:20:28 | 2018-12-23 06:44:10 | 2018-12-23 08:28:11 | 1:44:01 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com
dead | 3322956 | 2018-12-10 05:20:28 | 2018-12-23 06:46:10 | 2018-12-23 18:48:21 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | — | |||
fail | 3322957 | 2018-12-10 05:20:29 | 2018-12-23 07:00:13 | 2018-12-23 07:58:13 | 0:58:00 | 0:07:23 | 0:50:37 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh042 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322958 | 2018-12-10 05:20:30 | 2018-12-23 07:01:52 | 2018-12-23 07:25:52 | 0:24:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh060.front.sepia.ceph.com
pass | 3322959 | 2018-12-10 05:20:31 | 2018-12-23 07:03:52 | 2018-12-23 17:04:01 | 10:00:09 | 0:22:37 | 9:37:32 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3322960 | 2018-12-10 05:20:31 | 2018-12-23 07:04:02 | 2018-12-23 08:06:02 | 1:02:00 | 0:37:13 | 0:24:47 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3322961 | 2018-12-10 05:20:32 | 2018-12-23 07:07:52 | 2018-12-23 08:31:52 | 1:24:00 | 0:56:35 | 0:27:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
dead | 3322962 | 2018-12-10 05:20:33 | 2018-12-23 07:14:01 | 2018-12-23 18:58:12 | 11:44:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: SSH connection to ovh031 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
fail | 3322963 | 2018-12-10 05:20:33 | 2018-12-23 07:17:17 | 2018-12-23 08:03:17 | 0:46:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com
fail | 3322964 | 2018-12-10 05:20:34 | 2018-12-23 07:18:35 | 2018-12-23 08:30:35 | 1:12:00 | 0:07:37 | 1:04:23 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh060 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322965 | 2018-12-10 05:20:35 | 2018-12-23 07:24:38 | 2018-12-23 14:22:44 | 6:58:06 | 0:41:59 | 6:16:07 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
fail | 3322966 | 2018-12-10 05:20:35 | 2018-12-23 07:26:03 | 2018-12-23 08:06:03 | 0:40:00 | 0:08:34 | 0:31:26 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: Command failed on ovh037 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322967 | 2018-12-10 05:20:36 | 2018-12-23 07:45:03 | 2018-12-23 08:25:03 | 0:40:00 | 0:17:46 | 0:22:14 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3322968 | 2018-12-10 05:20:37 | 2018-12-23 07:45:22 | 2018-12-23 18:47:32 | 11:02:10 | 0:11:55 | 10:50:15 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh060 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322969 | 2018-12-10 05:20:37 | 2018-12-23 07:51:15 | 2018-12-23 08:45:15 | 0:54:00 | 0:16:22 | 0:37:38 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3322970 | 2018-12-10 05:20:38 | 2018-12-23 07:54:17 | 2018-12-23 08:42:17 | 0:48:00 | 0:20:43 | 0:27:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3322971 | 2018-12-10 05:20:39 | 2018-12-23 07:54:17 | 2018-12-23 10:32:19 | 2:38:02 | 0:58:12 | 1:39:50 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
dead | 3322972 | 2018-12-10 05:20:39 | 2018-12-23 07:58:03 | 2018-12-23 18:44:12 | 10:46:09 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: SSH connection to ovh072 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
fail | 3322973 | 2018-12-10 05:20:40 | 2018-12-23 07:58:14 | 2018-12-23 09:14:15 | 1:16:01 | 1:05:49 | 0:10:12 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-23 08:32:16.269761 mon.a mon.0 158.69.69.208:6789/0 1738 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3322974 | 2018-12-10 05:20:41 | 2018-12-23 08:02:24 | 2018-12-23 09:30:24 | 1:28:00 | 0:50:55 | 0:37:05 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3322975 | 2018-12-10 05:20:41 | 2018-12-23 08:03:18 | 2018-12-23 20:10:36 | 12:07:18 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
fail | 3322976 | 2018-12-10 05:20:42 | 2018-12-23 08:06:04 | 2018-12-23 09:16:04 | 1:10:00 | 0:07:06 | 1:02:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason: Command failed on ovh050 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322977 | 2018-12-10 05:20:43 | 2018-12-23 08:06:04 | 2018-12-23 08:58:04 | 0:52:00 | 0:27:24 | 0:24:36 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3322978 | 2018-12-10 05:20:43 | 2018-12-23 08:16:24 | 2018-12-23 14:00:29 | 5:44:05 | 0:20:49 | 5:23:16 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3322979 | 2018-12-10 05:20:44 | 2018-12-23 08:23:00 | 2018-12-23 10:15:01 | 1:52:01 | 1:33:16 | 0:18:45 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-23 09:38:11.028263 mon.b mon.0 158.69.65.118:6789/0 396 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3322980 | 2018-12-10 05:20:45 | 2018-12-23 08:25:15 | 2018-12-23 09:27:15 | 1:02:00 | 0:08:17 | 0:53:43 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh089 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322981 | 2018-12-10 05:20:45 | 2018-12-23 08:28:22 | 2018-12-23 12:10:24 | 3:42:02 | 0:12:23 | 3:29:39 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh097 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322982 | 2018-12-10 05:20:46 | 2018-12-23 08:30:46 | 2018-12-23 09:28:46 | 0:58:00 | 0:07:00 | 0:51:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh060 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322983 | 2018-12-10 05:20:47 | 2018-12-23 08:32:04 | 2018-12-23 09:04:03 | 0:31:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh004.front.sepia.ceph.com
fail | 3322984 | 2018-12-10 05:20:47 | 2018-12-23 08:33:27 | 2018-12-23 13:01:31 | 4:28:04 | 0:09:18 | 4:18:46 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh033 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322985 | 2018-12-10 05:20:48 | 2018-12-23 08:36:06 | 2018-12-23 09:18:06 | 0:42:00 | 0:29:23 | 0:12:37 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3322986 | 2018-12-10 05:20:49 | 2018-12-23 08:38:50 | 2018-12-23 09:46:50 | 1:08:00 | 0:10:53 | 0:57:07 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh066 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322987 | 2018-12-10 05:20:49 | 2018-12-23 08:42:19 | 2018-12-23 16:14:25 | 7:32:06 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh097.front.sepia.ceph.com
pass | 3322988 | 2018-12-10 05:20:50 | 2018-12-23 08:45:26 | 2018-12-23 10:47:27 | 2:02:01 | 1:01:25 | 1:00:36 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3322989 | 2018-12-10 05:20:51 | 2018-12-23 08:45:58 | 2018-12-23 09:45:58 | 1:00:00 | 0:07:49 | 0:52:11 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh003 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322990 | 2018-12-10 05:20:51 | 2018-12-23 08:54:06 | 2018-12-23 12:22:09 | 3:28:03 | 0:45:56 | 2:42:07 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
fail | 3322991 | 2018-12-10 05:20:52 | 2018-12-23 08:55:29 | 2018-12-23 10:25:30 | 1:30:01 | 0:53:11 | 0:36:50 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason: "2018-12-23 09:48:01.557693 mon.b mon.0 158.69.65.101:6789/0 146 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3322992 | 2018-12-10 05:20:53 | 2018-12-23 08:58:05 | 2018-12-23 10:04:06 | 1:06:01 | 0:07:33 | 0:58:28 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh036 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322993 | 2018-12-10 05:20:53 | 2018-12-23 09:04:14 | 2018-12-23 14:12:18 | 5:08:04 | 0:09:46 | 4:58:18 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh087 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3322994 | 2018-12-10 05:20:54 | 2018-12-23 09:14:06 | 2018-12-23 10:06:06 | 0:52:00 | 0:08:25 | 0:43:35 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
Failure Reason: Command failed on ovh031 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3322995 | 2018-12-10 05:20:55 | 2018-12-23 09:14:16 | 2018-12-23 09:58:16 | 0:44:00 | 0:17:52 | 0:26:08 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3322996 | 2018-12-10 05:20:55 | 2018-12-23 09:16:06 | 2018-12-23 11:28:07 | 2:12:01 | 1:29:43 | 0:42:18 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
fail | 3322997 | 2018-12-10 05:20:56 | 2018-12-23 09:18:08 | 2018-12-23 10:32:08 | 1:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh074.front.sepia.ceph.com
fail | 3322998 | 2018-12-10 05:20:57 | 2018-12-23 09:22:03 | 2018-12-23 09:40:02 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com
fail | 3322999 | 2018-12-10 05:20:57 | 2018-12-23 09:22:34 | 2018-12-23 09:40:33 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com
pass | 3323000 | 2018-12-10 05:20:58 | 2018-12-23 09:23:45 | 2018-12-23 11:31:46 | 2:08:01 | 0:21:43 | 1:46:18 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3323001 | 2018-12-10 05:20:59 | 2018-12-23 09:27:26 | 2018-12-23 11:23:27 | 1:56:01 | 1:26:12 | 0:29:49 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
pass | 3323002 | 2018-12-10 05:20:59 | 2018-12-23 09:28:58 | 2018-12-23 10:34:58 | 1:06:00 | 0:33:44 | 0:32:16 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 3323003 | 2018-12-10 05:21:00 | 2018-12-23 09:30:36 | 2018-12-23 12:36:39 | 3:06:03 | 0:11:52 | 2:54:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh029 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323004 | 2018-12-10 05:21:01 | 2018-12-23 09:31:57 | 2018-12-23 09:51:56 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh004.front.sepia.ceph.com
fail | 3323005 | 2018-12-10 05:21:01 | 2018-12-23 09:39:46 | 2018-12-23 09:57:45 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
dead | 3323006 | 2018-12-10 05:21:02 | 2018-12-23 09:40:03 | 2018-12-23 21:42:14 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | — | |||
pass | 3323007 | 2018-12-10 05:21:03 | 2018-12-23 09:40:34 | 2018-12-23 10:50:35 | 1:10:01 | 0:45:05 | 0:24:56 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3323008 | 2018-12-10 05:21:03 | 2018-12-23 09:45:41 | 2018-12-23 11:11:41 | 1:26:00 | 1:13:45 | 0:12:15 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-23 10:16:11.502604 mon.b mon.0 158.69.67.110:6789/0 143 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3323009 | 2018-12-10 05:21:04 | 2018-12-23 09:46:00 | 2018-12-23 11:36:00 | 1:50:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
pass | 3323010 | 2018-12-10 05:21:05 | 2018-12-23 09:46:00 | 2018-12-23 10:32:00 | 0:46:00 | 0:26:14 | 0:19:46 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3323011 | 2018-12-10 05:21:05 | 2018-12-23 09:47:02 | 2018-12-23 10:57:03 | 1:10:01 | 0:36:55 | 0:33:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3323012 | 2018-12-10 05:21:06 | 2018-12-23 09:51:51 | 2018-12-23 13:35:54 | 3:44:03 | 0:11:46 | 3:32:17 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh085 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323013 | 2018-12-10 05:21:07 | 2018-12-23 09:51:57 | 2018-12-23 11:07:58 | 1:16:01 | 0:46:03 | 0:29:58 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-23 10:36:29.113918 mon.b mon.0 158.69.69.16:6789/0 47 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3323014 | 2018-12-10 05:21:07 | 2018-12-23 09:57:47 | 2018-12-23 10:11:47 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh023.front.sepia.ceph.com
pass | 3323015 | 2018-12-10 05:21:08 | 2018-12-23 09:58:17 | 2018-12-23 14:18:20 | 4:20:03 | 0:52:06 | 3:27:57 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
pass | 3323016 | 2018-12-10 05:21:09 | 2018-12-23 10:04:18 | 2018-12-23 10:46:17 | 0:41:59 | 0:32:28 | 0:09:31 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323017 | 2018-12-10 05:21:09 | 2018-12-23 10:06:08 | 2018-12-23 10:40:07 | 0:33:59 | 0:19:15 | 0:14:44 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3323018 | 2018-12-10 05:21:10 | 2018-12-23 10:08:30 | 2018-12-23 14:02:33 | 3:54:03 | 0:48:16 | 3:05:47 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
pass | 3323019 | 2018-12-10 05:21:11 | 2018-12-23 10:11:52 | 2018-12-23 10:59:52 | 0:48:00 | 0:17:04 | 0:30:56 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3323020 | 2018-12-10 05:21:11 | 2018-12-23 10:15:13 | 2018-12-23 10:59:13 | 0:44:00 | 0:18:15 | 0:25:45 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3323021 | 2018-12-10 05:21:12 | 2018-12-23 10:25:31 | 2018-12-23 11:13:31 | 0:48:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh072.front.sepia.ceph.com
pass | 3323022 | 2018-12-10 05:21:13 | 2018-12-23 10:32:09 | 2018-12-23 14:20:12 | 3:48:03 | 0:23:57 | 3:24:06 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
pass | 3323023 | 2018-12-10 05:21:13 | 2018-12-23 10:32:09 | 2018-12-23 11:14:09 | 0:42:00 | 0:33:55 | 0:08:05 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323024 | 2018-12-10 05:21:14 | 2018-12-23 10:32:15 | 2018-12-23 12:12:16 | 1:40:01 | 1:03:56 | 0:36:05 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
dead | 3323025 | 2018-12-10 05:21:15 | 2018-12-23 10:32:20 | 2018-12-23 22:34:32 | 12:02:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | — | |||
pass | 3323026 | 2018-12-10 05:21:15 | 2018-12-23 10:33:56 | 2018-12-23 11:55:56 | 1:22:00 | 0:54:54 | 0:27:06 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3323027 | 2018-12-10 05:21:16 | 2018-12-23 10:35:11 | 2018-12-23 11:55:11 | 1:20:00 | 0:08:47 | 1:11:13 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
Failure Reason: Command failed on ovh034 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323028 | 2018-12-10 05:21:17 | 2018-12-23 10:37:03 | 2018-12-23 14:35:05 | 3:58:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh056.front.sepia.ceph.com
pass | 3323029 | 2018-12-10 05:21:17 | 2018-12-23 10:40:09 | 2018-12-23 12:30:10 | 1:50:01 | 1:31:28 | 0:18:33 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3323030 | 2018-12-10 05:21:18 | 2018-12-23 10:46:19 | 2018-12-23 11:08:19 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh052.front.sepia.ceph.com
pass | 3323031 | 2018-12-10 05:21:19 | 2018-12-23 10:47:39 | 2018-12-23 13:11:41 | 2:24:02 | 0:45:13 | 1:38:49 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
fail | 3323032 | 2018-12-10 05:21:19 | 2018-12-23 10:50:37 | 2018-12-23 11:14:36 | 0:23:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh090.front.sepia.ceph.com
fail | 3323033 | 2018-12-10 05:21:20 | 2018-12-23 10:57:15 | 2018-12-23 11:11:14 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh058.front.sepia.ceph.com
pass | 3323034 | 2018-12-10 05:21:21 | 2018-12-23 10:59:15 | 2018-12-23 17:13:20 | 6:14:05 | 0:22:31 | 5:51:34 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
pass | 3323035 | 2018-12-10 05:21:21 | 2018-12-23 11:00:08 | 2018-12-23 11:48:08 | 0:48:00 | 0:26:57 | 0:21:03 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3323036 | 2018-12-10 05:21:22 | 2018-12-23 11:02:48 | 2018-12-23 12:38:48 | 1:36:00 | 1:14:05 | 0:21:55 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3323037 | 2018-12-10 05:21:23 | 2018-12-23 11:08:11 | 2018-12-23 22:38:21 | 11:30:10 | 0:12:39 | 11:17:31 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh064 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323038 | 2018-12-10 05:21:23 | 2018-12-23 11:08:20 | 2018-12-23 12:34:20 | 1:26:00 | 1:01:48 | 0:24:12 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323039 | 2018-12-10 05:21:24 | 2018-12-23 11:11:27 | 2018-12-23 11:47:26 | 0:35:59 | 0:17:04 | 0:18:55 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3323040 | 2018-12-10 05:21:25 | 2018-12-23 11:11:42 | 2018-12-23 20:11:50 | 9:00:08 | 0:09:31 | 8:50:37 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh065 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323041 | 2018-12-10 05:21:25 | 2018-12-23 11:13:43 | 2018-12-23 12:13:43 | 1:00:00 | 0:32:24 | 0:27:36 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323042 | 2018-12-10 05:21:26 | 2018-12-23 11:14:10 | 2018-12-23 11:50:10 | 0:36:00 | 0:18:41 | 0:17:19 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
fail | 3323043 | 2018-12-10 05:21:27 | 2018-12-23 11:14:37 | 2018-12-23 16:18:41 | 5:04:04 | 0:12:35 | 4:51:29 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh093 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323044 | 2018-12-10 05:21:28 | 2018-12-23 11:23:39 | 2018-12-23 12:13:39 | 0:50:00 | 0:16:18 | 0:33:42 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
pass | 3323045 | 2018-12-10 05:21:28 | 2018-12-23 11:28:17 | 2018-12-23 11:56:17 | 0:28:00 | 0:19:38 | 0:08:22 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3323046 | 2018-12-10 05:21:29 | 2018-12-23 11:31:59 | 2018-12-23 12:43:59 | 1:12:00 | 0:43:15 | 0:28:45 | ovh | master | ubuntu | 16.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
fail | 3323047 | 2018-12-10 05:21:30 | 2018-12-23 11:36:02 | 2018-12-23 19:52:09 | 8:16:07 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
fail | 3323048 | 2018-12-10 05:21:30 | 2018-12-23 11:47:40 | 2018-12-23 12:41:40 | 0:54:00 | 0:07:01 | 0:46:59 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh082 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323049 | 2018-12-10 05:21:31 | 2018-12-23 11:48:09 | 2018-12-23 12:50:09 | 1:02:00 | 0:07:28 | 0:54:32 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh092 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323050 | 2018-12-10 05:21:32 | 2018-12-23 11:50:12 | 2018-12-23 15:42:14 | 3:52:02 | 0:20:56 | 3:31:06 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
pass | 3323051 | 2018-12-10 05:21:33 | 2018-12-23 11:55:24 | 2018-12-23 13:05:24 | 1:10:00 | 0:51:51 | 0:18:09 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3323052 | 2018-12-10 05:21:33 | 2018-12-23 11:55:58 | 2018-12-23 12:09:57 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh031.front.sepia.ceph.com
pass | 3323053 | 2018-12-10 05:21:34 | 2018-12-23 11:56:18 | 2018-12-23 18:18:24 | 6:22:06 | 0:38:33 | 5:43:33 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
fail | 3323054 | 2018-12-10 05:21:35 | 2018-12-23 12:09:59 | 2018-12-23 12:57:59 | 0:48:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh091.front.sepia.ceph.com
pass | 3323055 | 2018-12-10 05:21:35 | 2018-12-23 12:10:26 | 2018-12-23 12:52:25 | 0:41:59 | 0:31:42 | 0:10:17 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323056 | 2018-12-10 05:21:36 | 2018-12-23 12:12:18 | 2018-12-23 14:26:19 | 2:14:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh098.front.sepia.ceph.com
pass | 3323057 | 2018-12-10 05:21:37 | 2018-12-23 12:13:52 | 2018-12-23 13:07:52 | 0:54:00 | 0:45:48 | 0:08:12 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3323058 | 2018-12-10 05:21:37 | 2018-12-23 12:13:52 | 2018-12-23 13:21:52 | 1:08:00 | 0:38:35 | 0:29:25 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-12-23 12:55:14.926523 mon.a mon.0 158.69.65.242:6789/0 148 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 3323059 | 2018-12-10 05:21:38 | 2018-12-23 12:22:11 | 2018-12-23 18:06:16 | 5:44:05 | 0:09:51 | 5:34:14 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh016 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323060 | 2018-12-10 05:21:39 | 2018-12-23 12:30:18 | 2018-12-23 13:20:18 | 0:50:00 | 0:25:12 | 0:24:48 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
fail | 3323061 | 2018-12-10 05:21:39 | 2018-12-23 12:34:22 | 2018-12-23 13:04:22 | 0:30:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh080.front.sepia.ceph.com
pass | 3323062 | 2018-12-10 05:21:40 | 2018-12-23 12:36:40 | 2018-12-23 15:44:42 | 3:08:02 | 0:34:11 | 2:33:51 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
pass | 3323063 | 2018-12-10 05:21:41 | 2018-12-23 12:38:50 | 2018-12-23 13:48:51 | 1:10:01 | 0:36:36 | 0:33:25 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
fail | 3323064 | 2018-12-10 05:21:41 | 2018-12-23 12:41:53 | 2018-12-23 13:39:52 | 0:57:59 | 0:07:02 | 0:50:57 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason:
Command failed on ovh048 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323065 | 2018-12-10 05:21:42 | 2018-12-23 12:44:11 | 2018-12-23 14:50:12 | 2:06:01 | 0:09:28 | 1:56:33 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh065 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323066 | 2018-12-10 05:21:43 | 2018-12-23 12:50:11 | 2018-12-23 13:48:11 | 0:58:00 | 0:07:34 | 0:50:26 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
Failure Reason:
Command failed on ovh012 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323067 | 2018-12-10 05:21:43 | 2018-12-23 12:52:27 | 2018-12-23 13:14:27 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh033.front.sepia.ceph.com
dead | 3323068 | 2018-12-10 05:21:44 | 2018-12-23 12:58:00 | 2018-12-24 01:00:12 | 12:02:12 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |||
pass | 3323069 | 2018-12-10 05:21:45 | 2018-12-23 13:01:43 | 2018-12-23 13:45:43 | 0:44:00 | 0:16:37 | 0:27:23 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3323070 | 2018-12-10 05:21:45 | 2018-12-23 13:04:24 | 2018-12-23 13:26:23 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh100.front.sepia.ceph.com
fail | 3323071 | 2018-12-10 05:21:46 | 2018-12-23 13:05:37 | 2018-12-23 14:31:37 | 1:26:00 | 0:07:28 | 1:18:32 | ovh | master | rhel | 7.5 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |
Failure Reason:
Command failed on ovh001 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323072 | 2018-12-10 05:21:47 | 2018-12-23 13:08:04 | 2018-12-24 01:10:15 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
fail | 3323073 | 2018-12-10 05:21:47 | 2018-12-23 13:11:54 | 2018-12-23 13:33:53 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh022.front.sepia.ceph.com
fail | 3323074 | 2018-12-10 05:21:48 | 2018-12-23 13:14:29 | 2018-12-23 14:06:29 | 0:52:00 | 0:06:28 | 0:45:32 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason:
Command failed on ovh068 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323075 | 2018-12-10 05:21:49 | 2018-12-23 13:20:20 | 2018-12-24 01:27:35 | 12:07:15 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
fail | 3323076 | 2018-12-10 05:21:49 | 2018-12-23 13:21:54 | 2018-12-23 13:49:53 | 0:27:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh086.front.sepia.ceph.com
fail | 3323077 | 2018-12-10 05:21:50 | 2018-12-23 13:22:11 | 2018-12-23 14:14:10 | 0:51:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh034.front.sepia.ceph.com
pass | 3323078 | 2018-12-10 05:21:51 | 2018-12-23 13:26:25 | 2018-12-23 15:12:26 | 1:46:01 | 0:32:12 | 1:13:49 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3323079 | 2018-12-10 05:21:51 | 2018-12-23 13:34:06 | 2018-12-23 14:30:06 | 0:56:00 | 0:07:34 | 0:48:26 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
Command failed on ovh048 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323080 | 2018-12-10 05:21:52 | 2018-12-23 13:36:06 | 2018-12-23 14:02:05 | 0:25:59 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh028.front.sepia.ceph.com
fail | 3323081 | 2018-12-10 05:21:53 | 2018-12-23 13:40:06 | 2018-12-23 15:26:06 | 1:46:00 | 0:09:18 | 1:36:42 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | 6 | |
Failure Reason:
Command failed on ovh061 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323082 | 2018-12-10 05:21:53 | 2018-12-23 13:45:55 | 2018-12-23 14:55:55 | 1:10:00 | 0:47:42 | 0:22:18 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3323083 | 2018-12-10 05:21:54 | 2018-12-23 13:48:13 | 2018-12-23 15:08:13 | 1:20:00 | 0:52:00 | 0:28:00 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
"2018-12-23 14:25:45.217483 mon.b mon.0 158.69.69.67:6789/0 146 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3323084 | 2018-12-10 05:21:55 | 2018-12-23 13:48:52 | 2018-12-23 17:16:54 | 3:28:02 | 0:28:47 | 2:59:15 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | 6 | |
fail | 3323085 | 2018-12-10 05:21:55 | 2018-12-23 13:50:06 | 2018-12-23 14:36:06 | 0:46:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh084.front.sepia.ceph.com
fail | 3323086 | 2018-12-10 05:21:56 | 2018-12-23 14:00:31 | 2018-12-23 14:50:31 | 0:50:00 | 0:07:06 | 0:42:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason:
Command failed on ovh052 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323087 | 2018-12-10 05:21:57 | 2018-12-23 14:02:18 | 2018-12-23 19:48:22 | 5:46:04 | 0:24:25 | 5:21:39 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
pass | 3323088 | 2018-12-10 05:21:58 | 2018-12-23 14:02:34 | 2018-12-23 14:52:34 | 0:50:00 | 0:33:06 | 0:16:54 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323089 | 2018-12-10 05:21:58 | 2018-12-23 14:06:31 | 2018-12-23 15:24:31 | 1:18:00 | 0:26:56 | 0:51:04 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
fail | 3323090 | 2018-12-10 05:21:59 | 2018-12-23 14:12:31 | 2018-12-23 19:30:35 | 5:18:04 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh085.front.sepia.ceph.com
fail | 3323091 | 2018-12-10 05:22:00 | 2018-12-23 14:14:13 | 2018-12-23 15:04:13 | 0:50:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh064.front.sepia.ceph.com
pass | 3323092 | 2018-12-10 05:22:00 | 2018-12-23 14:18:22 | 2018-12-23 15:00:22 | 0:42:00 | 0:20:05 | 0:21:55 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3323093 | 2018-12-10 05:22:01 | 2018-12-23 14:20:24 | 2018-12-23 17:12:26 | 2:52:02 | 0:54:01 | 1:58:01 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3323094 | 2018-12-10 05:22:02 | 2018-12-23 14:22:45 | 2018-12-23 14:56:45 | 0:34:00 | 0:19:38 | 0:14:22 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |
fail | 3323095 | 2018-12-10 05:22:02 | 2018-12-23 14:26:24 | 2018-12-23 14:44:24 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh029.front.sepia.ceph.com
fail | 3323096 | 2018-12-10 05:22:03 | 2018-12-23 14:30:18 | 2018-12-23 15:04:18 | 0:34:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh036.front.sepia.ceph.com
pass | 3323097 | 2018-12-10 05:22:04 | 2018-12-23 14:31:39 | 2018-12-23 18:45:42 | 4:14:03 | 0:22:53 | 3:51:10 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |
fail | 3323098 | 2018-12-10 05:22:04 | 2018-12-23 14:35:07 | 2018-12-23 15:37:07 | 1:02:00 | 0:07:08 | 0:54:52 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason:
Command failed on ovh049 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323099 | 2018-12-10 05:22:05 | 2018-12-23 14:36:18 | 2018-12-23 16:04:18 | 1:28:00 | 0:52:40 | 0:35:20 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
pass | 3323100 | 2018-12-10 05:22:06 | 2018-12-23 14:44:26 | 2018-12-23 15:32:26 | 0:48:00 | 0:20:43 | 0:27:17 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 6 | |
fail | 3323101 | 2018-12-10 05:22:06 | 2018-12-23 14:50:22 | 2018-12-23 15:48:22 | 0:58:00 | 0:07:12 | 0:50:48 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
Failure Reason:
Command failed on ovh092 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323102 | 2018-12-10 05:22:07 | 2018-12-23 14:50:32 | 2018-12-23 15:36:32 | 0:46:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh054.front.sepia.ceph.com
dead | 3323103 | 2018-12-10 05:22:08 | 2018-12-23 14:52:36 | 2018-12-23 21:34:42 | 6:42:06 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 5 | |||
Failure Reason:
SSH connection to ovh031 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
fail | 3323104 | 2018-12-10 05:22:09 | 2018-12-23 14:56:07 | 2018-12-23 15:14:07 | 0:18:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |||
Failure Reason:
Could not reconnect to ubuntu@ovh059.front.sepia.ceph.com
pass | 3323105 | 2018-12-10 05:22:09 | 2018-12-23 14:56:46 | 2018-12-23 15:52:47 | 0:56:01 | 0:31:49 | 0:24:12 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323106 | 2018-12-10 05:22:10 | 2018-12-23 14:58:11 | 2018-12-23 20:10:15 | 5:12:04 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh031.front.sepia.ceph.com
fail | 3323107 | 2018-12-10 05:22:11 | 2018-12-23 15:00:34 | 2018-12-23 16:04:34 | 1:04:00 | 0:07:00 | 0:57:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
Failure Reason: Command failed on ovh064 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323108 | 2018-12-10 05:22:11 | 2018-12-23 15:04:15 | 2018-12-23 16:24:15 | 1:20:00 | 0:36:57 | 0:43:03 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
pass | 3323109 | 2018-12-10 05:22:12 | 2018-12-23 15:04:19 | 2018-12-23 16:56:20 | 1:52:01 | 0:22:53 | 1:29:08 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/config-commands.yaml whitelist_health.yaml} | 6 | |
pass | 3323110 | 2018-12-10 05:22:13 | 2018-12-23 15:08:26 | 2018-12-23 17:02:27 | 1:54:01 | 1:13:01 | 0:41:00 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
pass | 3323111 | 2018-12-10 05:22:13 | 2018-12-23 15:12:28 | 2018-12-23 16:14:28 | 1:02:00 | 0:48:02 | 0:13:58 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
fail | 3323112 | 2018-12-10 05:22:14 | 2018-12-23 15:14:19 | 2018-12-23 17:12:20 | 1:58:01 | 0:09:10 | 1:48:51 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh099 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323113 | 2018-12-10 05:22:15 | 2018-12-23 15:24:33 | 2018-12-23 16:16:33 | 0:52:00 | 0:07:10 | 0:44:50 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh083 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323114 | 2018-12-10 05:22:16 | 2018-12-23 15:26:13 | 2018-12-23 15:48:12 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh086.front.sepia.ceph.com
fail | 3323115 | 2018-12-10 05:22:16 | 2018-12-23 15:32:28 | 2018-12-23 18:44:31 | 3:12:03 | 0:09:06 | 3:02:57 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh029 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323116 | 2018-12-10 05:22:17 | 2018-12-23 15:36:41 | 2018-12-23 15:50:40 | 0:13:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh045.front.sepia.ceph.com
fail | 3323117 | 2018-12-10 05:22:18 | 2018-12-23 15:37:09 | 2018-12-23 15:55:08 | 0:17:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh084.front.sepia.ceph.com
pass | 3323118 | 2018-12-10 05:22:18 | 2018-12-23 15:42:27 | 2018-12-23 18:52:29 | 3:10:02 | 0:37:09 | 2:32:53 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | 6 | |
fail | 3323119 | 2018-12-10 05:22:19 | 2018-12-23 15:44:44 | 2018-12-23 16:06:44 | 0:22:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh067.front.sepia.ceph.com
fail | 3323120 | 2018-12-10 05:22:20 | 2018-12-23 15:48:24 | 2018-12-23 16:04:23 | 0:15:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh069.front.sepia.ceph.com
fail | 3323121 | 2018-12-10 05:22:21 | 2018-12-23 15:48:24 | 2018-12-23 16:24:24 | 0:36:00 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh006.front.sepia.ceph.com
pass | 3323122 | 2018-12-10 05:22:21 | 2018-12-23 15:50:42 | 2018-12-23 18:30:44 | 2:40:02 | 0:38:03 | 2:01:59 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | 6 | |
pass | 3323123 | 2018-12-10 05:22:22 | 2018-12-23 15:52:49 | 2018-12-23 17:00:49 | 1:08:00 | 0:41:07 | 0:26:53 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323124 | 2018-12-10 05:22:23 | 2018-12-23 15:54:08 | 2018-12-23 16:56:08 | 1:02:00 | 0:07:04 | 0:54:56 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: Command failed on ovh084 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323125 | 2018-12-10 05:22:23 | 2018-12-23 15:55:23 | 2018-12-23 19:31:25 | 3:36:02 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh086.front.sepia.ceph.com
pass | 3323126 | 2018-12-10 05:22:24 | 2018-12-23 16:04:30 | 2018-12-23 17:56:31 | 1:52:01 | 1:23:49 | 0:28:12 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
pass | 3323127 | 2018-12-10 05:22:25 | 2018-12-23 16:04:30 | 2018-12-23 17:16:31 | 1:12:01 | 0:40:26 | 0:31:35 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
pass | 3323128 | 2018-12-10 05:22:25 | 2018-12-23 16:04:36 | 2018-12-23 23:06:42 | 7:02:06 | 0:32:37 | 6:29:29 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 6 | |
fail | 3323129 | 2018-12-10 05:22:26 | 2018-12-23 16:06:52 | 2018-12-23 17:44:53 | 1:38:01 | 1:17:10 | 0:20:51 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: "2018-12-23 17:08:55.475895 mon.a mon.0 158.69.68.184:6789/0 527 : cluster [ERR] Health check failed: mon a is very low on available space (MON_DISK_CRIT)" in cluster log
fail | 3323130 | 2018-12-10 05:22:27 | 2018-12-23 16:14:28 | 2018-12-23 17:16:28 | 1:02:00 | 0:07:12 | 0:54:48 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh028 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323131 | 2018-12-10 05:22:28 | 2018-12-23 16:14:29 | 2018-12-24 04:16:40 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | — | |||
pass | 3323132 | 2018-12-10 05:22:28 | 2018-12-23 16:16:35 | 2018-12-23 18:00:36 | 1:44:01 | 1:01:25 | 0:42:36 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3323133 | 2018-12-10 05:22:29 | 2018-12-23 16:18:51 | 2018-12-23 17:10:50 | 0:51:59 | 0:07:00 | 0:44:59 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh009 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323134 | 2018-12-10 05:22:30 | 2018-12-23 16:24:28 | 2018-12-24 04:26:39 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |||
fail | 3323135 | 2018-12-10 05:22:30 | 2018-12-23 16:24:28 | 2018-12-23 17:32:28 | 1:08:00 | 0:07:07 | 1:00:53 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |
Failure Reason: Command failed on ovh089 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323136 | 2018-12-10 05:22:31 | 2018-12-23 16:56:22 | 2018-12-23 18:00:22 | 1:04:00 | 0:08:04 | 0:55:56 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh012 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323137 | 2018-12-10 05:22:32 | 2018-12-23 16:56:22 | 2018-12-23 19:08:23 | 2:12:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |||
pass | 3323138 | 2018-12-10 05:22:32 | 2018-12-23 17:00:51 | 2018-12-23 17:48:51 | 0:48:00 | 0:34:28 | 0:13:32 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323139 | 2018-12-10 05:22:33 | 2018-12-23 17:02:37 | 2018-12-23 17:36:37 | 0:34:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh034.front.sepia.ceph.com
fail | 3323140 | 2018-12-10 05:22:34 | 2018-12-23 17:04:13 | 2018-12-23 20:26:15 | 3:22:02 | 0:09:01 | 3:13:01 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/strays.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh056 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323141 | 2018-12-10 05:22:35 | 2018-12-23 17:10:52 | 2018-12-23 18:08:52 | 0:58:00 | 0:31:56 | 0:26:04 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323142 | 2018-12-10 05:22:35 | 2018-12-23 17:12:33 | 2018-12-23 18:04:32 | 0:51:59 | 0:07:10 | 0:44:49 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh014 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323143 | 2018-12-10 05:22:36 | 2018-12-23 17:12:33 | 2018-12-23 19:22:34 | 2:10:01 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: SSH connection to ovh054 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
fail | 3323144 | 2018-12-10 05:22:37 | 2018-12-23 17:13:21 | 2018-12-23 17:39:21 | 0:26:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh042.front.sepia.ceph.com
pass | 3323145 | 2018-12-10 05:22:37 | 2018-12-23 17:16:34 | 2018-12-23 17:56:33 | 0:39:59 | 0:17:30 | 0:22:29 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
pass | 3323146 | 2018-12-10 05:22:38 | 2018-12-23 17:16:33 | 2018-12-23 18:50:34 | 1:34:01 | 0:46:08 | 0:47:53 | ovh | master | centos | 7.4 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kernel_cfuse_workunits_dbench_iozone.yaml} | 4 | |
fail | 3323147 | 2018-12-10 05:22:39 | 2018-12-23 17:16:55 | 2018-12-23 20:38:58 | 3:22:03 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/auto-repair.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh094.front.sepia.ceph.com
fail | 3323148 | 2018-12-10 05:22:39 | 2018-12-23 17:32:34 | 2018-12-23 18:32:34 | 1:00:00 | 0:06:34 | 0:53:26 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: Command failed on ovh015 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323149 | 2018-12-10 05:22:40 | 2018-12-23 17:36:50 | 2018-12-23 20:26:51 | 2:50:01 | 2:22:44 | 0:27:17 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
Failure Reason: while scanning a plain scalar in "/tmp/teuth_ansible_failures_rFYDmG", line 1, column 13441 found unexpected ':' in "/tmp/teuth_ansible_failures_rFYDmG", line 1, column 13444 Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.
fail | 3323150 | 2018-12-10 05:22:41 | 2018-12-23 17:39:23 | 2018-12-23 20:11:24 | 2:32:01 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/backtrace.yaml whitelist_health.yaml} | 5 | |||
fail | 3323151 | 2018-12-10 05:22:42 | 2018-12-23 17:45:05 | 2018-12-23 18:17:05 | 0:32:00 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |||
fail | 3323152 | 2018-12-10 05:22:42 | 2018-12-23 17:48:55 | 2018-12-23 18:08:54 | 0:19:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh012.front.sepia.ceph.com
fail | 3323153 | 2018-12-10 05:22:43 | 2018-12-23 17:56:33 | 2018-12-23 22:12:36 | 4:16:03 | 0:12:59 | 4:03:04 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/client-limits.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Command failed on ovh082 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323154 | 2018-12-10 05:22:44 | 2018-12-23 17:56:35 | 2018-12-23 19:22:35 | 1:26:00 | 1:17:49 | 0:08:11 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
fail | 3323155 | 2018-12-10 05:22:44 | 2018-12-23 18:00:33 | 2018-12-23 18:20:33 | 0:20:00 | ovh | master | ubuntu | 18.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh006.front.sepia.ceph.com
fail | 3323156 | 2018-12-10 05:22:45 | 2018-12-23 18:00:37 | 2018-12-23 18:44:37 | 0:44:00 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/client-recovery.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: Could not reconnect to ubuntu@ovh026.front.sepia.ceph.com
pass | 3323157 | 2018-12-10 05:22:46 | 2018-12-23 18:04:45 | 2018-12-23 19:16:45 | 1:12:00 | 0:43:03 | 0:28:57 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |
fail | 3323158 | 2018-12-10 05:22:46 | 2018-12-23 18:06:03 | 2018-12-23 19:06:03 | 1:00:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh072.front.sepia.ceph.com
dead | 3323159 | 2018-12-10 05:22:47 | 2018-12-23 18:06:17 | 2018-12-24 06:08:28 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/config-commands.yaml whitelist_health.yaml} | — | |||
fail | 3323160 | 2018-12-10 05:22:48 | 2018-12-23 18:09:01 | 2018-12-23 18:59:01 | 0:50:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh054.front.sepia.ceph.com
fail | 3323161 | 2018-12-10 05:22:48 | 2018-12-23 18:09:01 | 2018-12-23 19:03:01 | 0:54:00 | 0:07:14 | 0:46:46 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh024 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323162 | 2018-12-10 05:22:49 | 2018-12-23 18:17:07 | 2018-12-23 20:39:08 | 2:22:01 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/damage.yaml whitelist_health.yaml} | 5 | |||
Failure Reason: SSH connection to ovh031 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
fail | 3323163 | 2018-12-10 05:22:50 | 2018-12-23 18:18:33 | 2018-12-23 19:18:33 | 1:00:00 | 0:46:52 | 0:13:08 | ovh | master | centos | 7.4 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-23 18:56:47.062158 mon.b mon.0 158.69.67.224:6789/0 42 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 3323164 | 2018-12-10 05:22:51 | 2018-12-23 18:20:45 | 2018-12-23 18:56:45 | 0:36:00 | 0:17:34 | 0:18:26 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
dead | 3323165 | 2018-12-10 05:22:51 | 2018-12-23 18:23:58 | 2018-12-23 20:13:59 | 1:50:01 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/data-scan.yaml whitelist_health.yaml} | 6 | |||
Failure Reason: SSH connection to ovh085 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'
pass | 3323166 | 2018-12-10 05:22:52 | 2018-12-23 18:30:55 | 2018-12-23 19:42:55 | 1:12:00 | 0:31:51 | 0:40:09 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323167 | 2018-12-10 05:22:53 | 2018-12-23 18:32:47 | 2018-12-23 19:32:47 | 1:00:00 | 0:07:24 | 0:52:36 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
Failure Reason: Command failed on ovh052 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323168 | 2018-12-10 05:22:53 | 2018-12-23 18:44:25 | 2018-12-24 06:46:36 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/failover.yaml whitelist_health.yaml} | — | |||
fail | 3323169 | 2018-12-10 05:22:54 | 2018-12-23 18:44:32 | 2018-12-23 19:14:31 | 0:29:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh009.front.sepia.ceph.com
pass | 3323170 | 2018-12-10 05:22:55 | 2018-12-23 18:44:38 | 2018-12-23 19:30:38 | 0:46:00 | 0:21:04 | 0:24:56 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_direct_io.yaml} | 3 | |
fail | 3323171 | 2018-12-10 05:22:55 | 2018-12-23 18:45:54 | 2018-12-23 19:07:53 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/mixed-clients/{begin.yaml clusters/1-mds-2-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@ovh085.front.sepia.ceph.com
dead | 3323172 | 2018-12-10 05:22:56 | 2018-12-23 18:47:44 | 2018-12-24 06:49:55 | 12:02:11 | ovh | master | rhel | 7.5 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/forward-scrub.yaml whitelist_health.yaml} | — | |||
pass | 3323173 | 2018-12-10 05:22:57 | 2018-12-23 18:48:22 | 2018-12-23 20:18:22 | 1:30:00 | 0:56:58 | 0:33:02 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
pass | 3323174 | 2018-12-10 05:22:58 | 2018-12-23 18:50:45 | 2018-12-23 20:54:46 | 2:04:01 | 1:33:04 | 0:30:57 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_kernel_untar_build.yaml} | 3 | |
fail | 3323175 | 2018-12-10 05:22:58 | 2018-12-23 18:52:41 | 2018-12-23 21:48:43 | 2:56:02 | 0:28:51 | 2:27:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/journal-repair.yaml whitelist_health.yaml} | 6 | |
Failure Reason: Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)
pass | 3323176 | 2018-12-10 05:22:59 | 2018-12-23 18:56:48 | 2018-12-23 20:20:49 | 1:24:01 | 1:05:51 | 0:18:10 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_misc.yaml} | 3 | |
fail | 3323177 | 2018-12-10 05:23:00 | 2018-12-23 18:58:25 | 2018-12-23 19:20:24 | 0:21:59 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh024.front.sepia.ceph.com
fail | 3323178 | 2018-12-10 05:23:00 | 2018-12-23 18:59:02 | 2018-12-23 20:43:03 | 1:44:01 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-flush.yaml whitelist_health.yaml} | 5 | |||
Failure Reason: Could not reconnect to ubuntu@ovh085.front.sepia.ceph.com
fail | 3323179 | 2018-12-10 05:23:01 | 2018-12-23 19:03:03 | 2018-12-23 20:13:04 | 1:10:01 | 0:07:14 | 1:02:47 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason: Command failed on ovh009 with status 1: '\n sudo yum -y install ceph-radosgw\n '
fail | 3323180 | 2018-12-10 05:23:02 | 2018-12-23 19:06:15 | 2018-12-23 19:56:15 | 0:50:00 | ovh | master | rhel | 7.5 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} thrashers/mds.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_ffsb.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh072.front.sepia.ceph.com
dead | 3323181 | 2018-12-10 05:23:02 | 2018-12-23 19:08:07 | 2018-12-24 07:10:18 | 12:02:11 | ovh | master | centos | 7.4 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/mds-full.yaml whitelist_health.yaml} | — | |||
dead | 3323182 | 2018-12-10 05:23:03 | 2018-12-23 19:08:24 | 2018-12-23 19:56:24 | 0:48:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_dbench.yaml} | 3 | |||
Failure Reason: SSH connection to ovh054 was lost: 'sudo lsb_release -is'
fail | 3323183 | 2018-12-10 05:23:04 | 2018-12-23 19:14:43 | 2018-12-23 20:20:43 | 1:06:00 | 0:44:45 | 0:21:15 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_suites_ffsb.yaml} | 3 | |
Failure Reason: "2018-12-23 19:49:32.751897 mon.b mon.0 158.69.65.161:6789/0 147 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 3323184 | 2018-12-10 05:23:04 | 2018-12-23 19:16:57 | 2018-12-24 07:19:08 | 12:02:11 | ovh | master | ubuntu | 18.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/pool-perm.yaml whitelist_health.yaml} | — | |||
fail | 3323185 | 2018-12-10 05:23:05 | 2018-12-23 19:18:45 | 2018-12-23 19:32:45 | 0:14:00 | ovh | master | ubuntu | 18.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/kclient_workunit_suites_fsstress.yaml} | 3 | |||
Failure Reason: Could not reconnect to ubuntu@ovh083.front.sepia.ceph.com
fail | 3323186 | 2018-12-10 05:23:06 | 2018-12-23 19:20:36 | 2018-12-23 20:20:36 | 1:00:00 | 0:07:06 | 0:52:54 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsx.yaml} | 3 | |
Failure Reason: Command failed on ovh098 with status 1: '\n sudo yum -y install ceph-radosgw\n '
pass | 3323187 | 2018-12-10 05:23:07 | 2018-12-23 19:22:36 | 2018-12-23 21:08:36 | 1:46:00 | 0:25:05 | 1:20:55 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/sessionmap.yaml whitelist_health.yaml} | 6 | |
pass | 3323188 | 2018-12-10 05:23:07 | 2018-12-23 19:22:36 | 2018-12-23 20:16:36 | 0:54:00 | 0:32:10 | 0:21:50 | ovh | master | ubuntu | 16.04 | kcephfs/thrash/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} thrashers/mon.yaml thrashosds-health.yaml whitelist_health.yaml workloads/kclient_workunit_suites_iozone.yaml} | 3 | |
fail | 3323189 | 2018-12-10 05:23:08 | 2018-12-23 19:30:47 | 2018-12-23 20:28:47 | 0:58:00 | 0:07:00 | 0:51:00 | ovh | master | rhel | 7.5 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/kclient_workunit_suites_fsync.yaml} | 3 | |
Failure Reason: Command failed on ovh042 with status 1: '\n sudo yum -y install ceph-radosgw\n '
dead | 3323190 | 2018-12-10 05:23:09 | 2018-12-23 19:30:47 | 2018-12-24 07:32:58 | 12:02:11 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/strays.yaml whitelist_health.yaml} | — | |||
dead | 3323191 | 2018-12-10 05:23:09 | 2018-12-23 19:31:26 | 2018-12-24 07:33:55 | 12:02:29 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_iozone.yaml} | 3 | |||
pass | 3323192 | 2018-12-10 05:23:10 | 2018-12-23 19:32:57 | 2018-12-23 20:16:56 | 0:43:59 | 0:18:07 | 0:25:52 | ovh | master | centos | 7.4 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/kclient_workunit_suites_pjd.yaml} | 3 | |
pass | 3323193 | 2018-12-10 05:23:11 | 2018-12-23 19:32:57 | 2018-12-23 21:50:57 | 2:18:00 | 0:42:43 | 1:35:17 | ovh | master | ubuntu | 16.04 | kcephfs/recovery/{begin.yaml clusters/1-mds-4-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/volume-client.yaml whitelist_health.yaml} | 6 | |
pass | 3323194 | 2018-12-10 05:23:11 | 2018-12-23 19:43:07 | 2018-12-23 20:29:07 | 0:46:00 | 0:15:28 | 0:30:32 | ovh | master | ubuntu | 16.04 | kcephfs/cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml ms-die-on-skipped.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/kclient_workunit_trivial_sync.yaml} | 3 |