Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 1541968 2017-08-19 05:20:17 2017-08-20 08:05:47 2017-08-20 09:25:40 1:19:53 0:22:07 0:57:46 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1541969 2017-08-19 05:20:18 2017-08-20 08:05:50 2017-08-20 09:37:41 1:31:51 1:12:20 0:19:31 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/bluestore.yaml tasks/kernel_cfuse_workunits_dbench_iozone.yaml} 4
Failure Reason:

"2017-08-20 08:46:05.845632 mon.b mon.1 172.21.15.45:6789/0 150 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1541970 2017-08-19 05:20:18 2017-08-20 08:06:52 2017-08-20 09:18:52 1:12:00 0:16:12 0:55:48 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/auto-repair.yaml} 4
Failure Reason:

"2017-08-20 09:07:25.558523 mon.a mon.0 172.21.15.83:6789/0 201 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1541971 2017-08-19 05:20:19 2017-08-20 08:07:00 2017-08-20 09:33:00 1:26:00 1:04:20 0:21:40 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/default.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 08:32:13.492363 mon.a mon.0 172.21.15.7:6789/0 111 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1541972 2017-08-19 05:20:20 2017-08-20 08:08:11 2017-08-20 09:44:09 1:35:58 1:14:20 0:21:38 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1541973 2017-08-19 05:20:20 2017-08-20 08:09:26 2017-08-20 10:01:26 1:52:00 0:12:00 1:40:00 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/backtrace.yaml} 4
Failure Reason:

"2017-08-20 09:52:57.573173 mon.a mon.0 172.21.15.99:6789/0 198 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1541974 2017-08-19 05:20:21 2017-08-20 08:10:01 2017-08-20 10:06:04 1:56:03 1:06:19 0:49:44 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1541975 2017-08-19 05:20:21 2017-08-20 08:13:21 2017-08-20 08:55:15 0:41:54 0:23:02 0:18:52 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1541976 2017-08-19 05:20:22 2017-08-20 08:13:48 2017-08-20 08:45:45 0:31:57 0:20:22 0:11:35 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-limits.yaml} 4
Failure Reason:

"2017-08-20 08:31:45.158153 mon.a mon.0 172.21.15.150:6789/0 220 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1541977 2017-08-19 05:20:23 2017-08-20 08:15:10 2017-08-20 12:39:14 4:24:04 4:00:27 0:23:37 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi093 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1541978 2017-08-19 05:20:23 2017-08-20 08:15:12 2017-08-20 10:23:11 2:07:59 0:29:08 1:38:51 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 09:59:39.076418 mon.b mon.0 172.21.15.36:6789/0 179 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

dead 1541979 2017-08-19 05:20:24 2017-08-20 08:15:34 2017-08-20 20:22:17 12:06:43 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-recovery.yaml} 4
pass 1541980 2017-08-19 05:20:24 2017-08-20 08:19:03 2017-08-20 10:17:04 1:58:01 0:50:33 1:07:28 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1541981 2017-08-19 05:20:25 2017-08-20 08:19:08 2017-08-20 10:07:04 1:47:56 0:54:27 0:53:29 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 08:49:43.705661 mon.b mon.0 172.21.15.59:6789/0 384 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1541982 2017-08-19 05:20:26 2017-08-20 08:19:12 2017-08-20 10:13:12 1:54:00 0:20:52 1:33:08 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/config-commands.yaml} 4
Failure Reason:

"2017-08-20 09:57:53.167049 mon.a mon.0 172.21.15.200:6789/0 218 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1541983 2017-08-19 05:20:26 2017-08-20 08:19:16 2017-08-20 08:53:14 0:33:58 0:13:18 0:20:40 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1541984 2017-08-19 05:20:27 2017-08-20 08:19:52 2017-08-20 09:03:54 0:44:02 0:25:17 0:18:45 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1541985 2017-08-19 05:20:28 2017-08-20 08:23:15 2017-08-20 09:15:12 0:51:57 0:28:00 0:23:57 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/damage.yaml} 4
Failure Reason:

"2017-08-20 08:45:17.968555 mon.a mon.0 172.21.15.103:6789/0 379 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1541986 2017-08-19 05:20:28 2017-08-20 08:23:39 2017-08-20 09:33:37 1:09:58 0:51:03 0:18:55 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 08:44:25.531996 mon.b mon.0 172.21.15.73:6789/0 178 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log

pass 1541987 2017-08-19 05:20:29 2017-08-20 08:25:57 2017-08-20 08:51:55 0:25:58 0:10:20 0:15:38 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1541988 2017-08-19 05:20:29 2017-08-20 08:25:55 2017-08-20 11:54:02 3:28:07 0:38:06 2:50:01 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/data-scan.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

pass 1541989 2017-08-19 05:20:30 2017-08-20 08:25:56 2017-08-20 09:29:56 1:04:00 0:13:07 0:50:53 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1541990 2017-08-19 05:20:31 2017-08-20 08:25:59 2017-08-20 12:00:03 3:34:04 2:30:07 1:03:57 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1541991 2017-08-19 05:20:31 2017-08-20 08:28:31 2017-08-20 11:18:30 2:49:59 0:32:31 2:17:28 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml} 4
Failure Reason:

"2017-08-20 10:47:12.251454 mon.a mon.0 172.21.15.172:6789/0 213 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1541992 2017-08-19 05:20:32 2017-08-20 08:29:15 2017-08-20 09:01:13 0:31:58 0:12:00 0:19:58 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1541993 2017-08-19 05:20:32 2017-08-20 08:30:51 2017-08-20 09:02:51 0:32:00 0:13:37 0:18:23 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1541994 2017-08-19 05:20:33 2017-08-20 08:31:04 2017-08-20 11:27:06 2:56:02 2:12:43 0:43:19 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/filestore-xfs.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
Failure Reason:

"2017-08-20 10:08:15.712618 mon.b mon.0 172.21.15.47:6789/0 260 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1541995 2017-08-19 05:20:34 2017-08-20 08:31:04 2017-08-20 13:57:11 5:26:07 0:34:38 4:51:29 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/forward-scrub.yaml} 4
Failure Reason:

"2017-08-20 13:29:30.013826 mon.a mon.0 172.21.15.53:6789/0 475 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1541996 2017-08-19 05:20:34 2017-08-20 08:31:18 2017-08-20 09:29:19 0:58:01 0:35:53 0:22:08 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 08:56:21.398266 mon.b mon.0 172.21.15.48:6789/0 176 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1541997 2017-08-19 05:20:35 2017-08-20 08:34:01 2017-08-20 09:43:56 1:09:55 0:46:08 0:23:47 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1541998 2017-08-19 05:20:35 2017-08-20 08:34:56 2017-08-20 10:36:53 2:01:57 0:21:58 1:39:59 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/journal-repair.yaml} 4
Failure Reason:

"2017-08-20 10:16:46.148928 mon.a mon.0 172.21.15.183:6789/0 395 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1541999 2017-08-19 05:20:36 2017-08-20 08:35:35 2017-08-20 10:29:35 1:54:00 1:09:50 0:44:10 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
Failure Reason:

"2017-08-20 09:24:07.066514 mon.b mon.0 172.21.15.143:6789/0 203 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

pass 1542000 2017-08-19 05:20:37 2017-08-20 08:37:09 2017-08-20 09:45:10 1:08:01 0:24:48 0:43:13 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1542001 2017-08-19 05:20:37 2017-08-20 08:37:09 2017-08-20 09:53:11 1:16:02 0:11:38 1:04:24 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-flush.yaml} 4
Failure Reason:

"2017-08-20 09:43:03.745130 mon.a mon.0 172.21.15.194:6789/0 206 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542002 2017-08-19 05:20:38 2017-08-20 08:37:27 2017-08-20 13:09:33 4:32:06 3:50:59 0:41:07 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi175 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1542003 2017-08-19 05:20:38 2017-08-20 08:38:31 2017-08-20 09:52:32 1:14:01 0:55:57 0:18:04 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 09:04:08.310037 mon.b mon.0 172.21.15.3:6789/0 110 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542004 2017-08-19 05:20:39 2017-08-20 08:39:37 2017-08-20 09:59:33 1:19:56 0:09:21 1:10:35 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-full.yaml} 4
Failure Reason:

Command failed on smithi169 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpGAwHdX'

pass 1542005 2017-08-19 05:20:40 2017-08-20 08:41:44 2017-08-20 10:09:31 1:27:47 1:05:21 0:22:26 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1542006 2017-08-19 05:20:40 2017-08-20 08:44:59 2017-08-20 09:22:59 0:38:00 0:30:32 0:07:28 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
fail 1542007 2017-08-19 05:20:41 2017-08-20 08:45:49 2017-08-20 14:19:53 5:34:04 0:12:33 5:21:31 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/pool-perm.yaml} 4
Failure Reason:

"2017-08-20 14:10:55.768138 mon.a mon.0 172.21.15.51:6789/0 220 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542008 2017-08-19 05:20:41 2017-08-20 08:46:54 2017-08-20 10:22:42 1:35:48 0:34:23 1:01:25 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1542009 2017-08-19 05:20:42 2017-08-20 08:47:20 2017-08-20 09:43:20 0:56:00 0:21:42 0:34:18 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1542010 2017-08-19 05:20:43 2017-08-20 08:47:30 2017-08-20 10:05:30 1:18:00 0:16:34 1:01:26 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/sessionmap.yaml} 4
Failure Reason:

"2017-08-20 09:50:56.267828 mon.a mon.0 172.21.15.92:6789/0 202 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542011 2017-08-19 05:20:43 2017-08-20 08:48:01 2017-08-20 09:27:56 0:39:55 0:25:54 0:14:01 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 09:04:14.841425 mon.a mon.1 172.21.15.169:6789/0 96 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 1542012 2017-08-19 05:20:44 2017-08-20 08:50:52 2017-08-20 10:08:51 1:17:59 0:13:16 1:04:43 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1542013 2017-08-19 05:20:44 2017-08-20 08:51:05 2017-08-20 11:13:06 2:22:01 0:27:15 1:54:46 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/strays.yaml} 4
Failure Reason:

Test failure: test_files_throttle (tasks.cephfs.test_strays.TestStrays)

pass 1542014 2017-08-19 05:20:45 2017-08-20 08:51:14 2017-08-20 09:57:14 1:06:00 0:23:12 0:42:48 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1542015 2017-08-19 05:20:46 2017-08-20 08:51:43 2017-08-20 12:09:44 3:18:01 3:03:13 0:14:48 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1542016 2017-08-19 05:20:46 2017-08-20 08:51:58 2017-08-20 15:38:05 6:46:07 0:29:25 6:16:42 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/volume-client.yaml} 4
Failure Reason:

"2017-08-20 15:11:29.683911 mon.a mon.0 172.21.15.103:6789/0 199 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542017 2017-08-19 05:20:47 2017-08-20 08:53:25 2017-08-20 09:33:15 0:39:50 0:09:14 0:30:36 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1542018 2017-08-19 05:20:47 2017-08-20 08:53:16 2017-08-20 09:09:15 0:15:59 0:12:10 0:03:49 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1542019 2017-08-19 05:20:48 2017-08-20 08:53:24 2017-08-20 11:35:25 2:42:01 0:52:59 1:49:02 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/filestore-xfs.yaml tasks/kernel_cfuse_workunits_dbench_iozone.yaml} 4
Failure Reason:

"2017-08-20 11:00:11.620706 mon.b mon.0 172.21.15.24:6789/0 206 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1542020 2017-08-19 05:20:49 2017-08-20 08:55:41 2017-08-20 10:29:40 1:33:59 0:14:15 1:19:44 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/auto-repair.yaml} 4
Failure Reason:

"2017-08-20 10:17:04.821533 mon.a mon.0 172.21.15.8:6789/0 211 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542021 2017-08-19 05:20:49 2017-08-20 08:58:03 2017-08-20 09:58:02 0:59:59 0:39:11 0:20:48 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 09:27:22.232913 mon.a mon.0 172.21.15.149:6789/0 263 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

pass 1542022 2017-08-19 05:20:50 2017-08-20 09:00:05 2017-08-20 10:12:05 1:12:00 1:01:19 0:10:41 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1542023 2017-08-19 05:20:50 2017-08-20 09:00:07 2017-08-20 09:52:05 0:51:58 0:12:49 0:39:09 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/backtrace.yaml} 4
Failure Reason:

"2017-08-20 09:41:52.829366 mon.a mon.0 172.21.15.75:6789/0 199 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542024 2017-08-19 05:20:51 2017-08-20 09:01:18 2017-08-20 10:37:16 1:35:58 0:40:58 0:55:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_misc.yaml} 3
Failure Reason:

"2017-08-20 10:01:40.871670 mon.b mon.0 172.21.15.73:6789/0 119 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

pass 1542025 2017-08-19 05:20:52 2017-08-20 09:03:06 2017-08-20 10:11:08 1:08:02 0:26:12 0:41:50 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1542026 2017-08-19 05:20:52 2017-08-20 09:04:02 2017-08-20 13:02:04 3:58:02 0:21:23 3:36:39 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-limits.yaml} 4
Failure Reason:

"2017-08-20 12:45:19.418850 mon.a mon.0 172.21.15.172:6789/0 206 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542027 2017-08-19 05:20:53 2017-08-20 09:05:30 2017-08-20 13:29:33 4:24:03 3:54:57 0:29:06 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi095 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1542028 2017-08-19 05:20:53 2017-08-20 09:07:28 2017-08-20 09:45:25 0:37:57 0:15:22 0:22:35 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 09:34:32.218821 mon.a mon.0 172.21.15.84:6789/0 121 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

dead 1542029 2017-08-19 05:20:54 2017-08-20 09:09:27 2017-08-20 21:17:40 12:08:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-recovery.yaml} 4
pass 1542030 2017-08-19 05:20:55 2017-08-20 09:11:27 2017-08-20 10:53:33 1:42:06 0:48:07 0:53:59 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1542031 2017-08-19 05:20:55 2017-08-20 09:14:16 2017-08-20 10:14:15 0:59:59 0:51:15 0:08:44 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 09:31:47.735258 mon.b mon.0 172.21.15.5:6789/0 247 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1542032 2017-08-19 05:20:56 2017-08-20 09:15:29 2017-08-20 13:35:30 4:20:01 0:15:10 4:04:51 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/config-commands.yaml} 4
Failure Reason:

"2017-08-20 13:24:31.052044 mon.a mon.0 172.21.15.150:6789/0 199 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542033 2017-08-19 05:20:56 2017-08-20 09:15:33 2017-08-20 09:53:25 0:37:52 0:12:48 0:25:04 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1542034 2017-08-19 05:20:57 2017-08-20 09:15:31 2017-08-20 09:47:27 0:31:56 0:26:50 0:05:06 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1542035 2017-08-19 05:20:58 2017-08-20 09:17:57 2017-08-20 16:22:15 7:04:18 0:33:57 6:30:21 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/damage.yaml} 4
Failure Reason:

"2017-08-20 15:59:22.742059 mon.a mon.0 172.21.15.100:6789/0 393 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1542036 2017-08-19 05:20:58 2017-08-20 09:19:06 2017-08-20 10:07:07 0:48:01 0:32:57 0:15:04 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 09:36:02.043569 mon.a mon.0 172.21.15.72:6789/0 185 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 1542037 2017-08-19 05:20:59 2017-08-20 09:20:31 2017-08-20 09:48:26 0:27:55 0:14:01 0:13:54 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1542038 2017-08-19 05:20:59 2017-08-20 09:20:26 2017-08-20 11:42:26 2:22:00 0:35:39 1:46:21 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/data-scan.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

pass 1542039 2017-08-19 05:21:00 2017-08-20 09:21:04 2017-08-20 10:05:03 0:43:59 0:12:14 0:31:45 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1542040 2017-08-19 05:21:00 2017-08-20 09:23:12 2017-08-20 12:07:15 2:44:03 2:21:59 0:22:04 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1542041 2017-08-19 05:21:01 2017-08-20 09:24:26 2017-08-20 11:12:26 1:48:00 1:15:58 0:32:02 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/failover.yaml} 4
Failure Reason:

"2017-08-20 09:53:13.659530 mon.a mon.0 172.21.15.32:6789/0 220 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542042 2017-08-19 05:21:02 2017-08-20 09:25:56 2017-08-20 09:45:52 0:19:56 0:16:48 0:03:08 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1542043 2017-08-19 05:21:02 2017-08-20 09:25:56 2017-08-20 09:47:54 0:21:58 0:13:17 0:08:41 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1542044 2017-08-19 05:21:03 2017-08-20 09:28:01 2017-08-20 12:26:01 2:58:00 0:54:12 2:03:48 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/bluestore.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
Failure Reason:

"2017-08-20 11:36:42.951149 mon.b mon.0 172.21.15.17:6789/0 202 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1542045 2017-08-19 05:21:03 2017-08-20 09:29:22 2017-08-20 10:37:26 1:08:04 0:20:16 0:47:48 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/forward-scrub.yaml} 4
Failure Reason:

"2017-08-20 10:23:08.038599 mon.a mon.0 172.21.15.130:6789/0 460 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1542046 2017-08-19 05:21:04 2017-08-20 09:29:51 2017-08-20 10:51:40 1:21:49 0:33:42 0:48:07 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/default.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 10:32:57.437214 mon.b mon.0 172.21.15.15:6789/0 110 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1542047 2017-08-19 05:21:05 2017-08-20 09:30:00 2017-08-20 10:35:58 1:05:58 0:42:15 0:23:43 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
Failure Reason:

"2017-08-20 10:01:58.578576 mon.b mon.1 172.21.15.29:6789/0 37 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1542048 2017-08-19 05:21:05 2017-08-20 09:33:25 2017-08-20 10:39:33 1:06:08 0:28:39 0:37:29 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/journal-repair.yaml} 4
Failure Reason:

"2017-08-20 10:15:55.907178 mon.a mon.0 172.21.15.82:6789/0 393 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

pass 1542049 2017-08-19 05:21:06 2017-08-20 09:33:18 2017-08-20 10:45:18 1:12:00 0:38:29 0:33:31 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1542050 2017-08-19 05:21:07 2017-08-20 09:33:41 2017-08-20 10:07:40 0:33:59 0:21:44 0:12:15 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1542051 2017-08-19 05:21:07 2017-08-20 09:37:31 2017-08-20 10:33:27 0:55:56 0:13:26 0:42:30 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-flush.yaml} 4
Failure Reason:

"2017-08-20 10:24:21.410869 mon.a mon.0 172.21.15.26:6789/0 210 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

dead 1542052 2017-08-19 05:21:08 2017-08-20 09:37:46 2017-08-20 21:44:40 12:06:54 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_snaps.yaml} 3
fail 1542053 2017-08-19 05:21:08 2017-08-20 09:40:18 2017-08-20 10:50:17 1:09:59 0:34:15 0:35:44 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 10:21:50.031256 mon.a mon.0 172.21.15.1:6789/0 197 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542054 2017-08-19 05:21:09 2017-08-20 09:42:06 2017-08-20 10:33:55 0:51:49 0:08:51 0:42:58 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-full.yaml} 4
Failure Reason:

Command failed on smithi070 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpVefGsm'

pass 1542055 2017-08-19 05:21:10 2017-08-20 09:43:27 2017-08-20 10:59:25 1:15:58 0:59:20 0:16:38 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1542056 2017-08-19 05:21:10 2017-08-20 09:44:06 2017-08-20 10:32:04 0:47:58 0:34:43 0:13:15 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-08-20 10:02:49.816117 mon.a mon.0 172.21.15.115:6789/0 201 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 1542057 2017-08-19 05:21:11 2017-08-20 09:44:12 2017-08-20 12:04:06 2:19:54 0:19:41 2:00:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/pool-perm.yaml} 4
Failure Reason:

"2017-08-20 11:47:31.679981 mon.a mon.0 172.21.15.56:6789/0 199 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542058 2017-08-19 05:21:11 2017-08-20 09:44:22 2017-08-20 10:16:21 0:31:59 0:14:11 0:17:48 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
fail 1542059 2017-08-19 05:21:12 2017-08-20 09:45:30 2017-08-20 10:31:16 0:45:46 0:28:46 0:17:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
Failure Reason:

SELinux denials found on ubuntu@smithi003.front.sepia.ceph.com: ['type=AVC msg=audit(1503223574.052:4365): avc: denied { getattr } for pid=22825 comm="ceph-osd" path="/dev/nvme0n1p2" dev="devtmpfs" ino=58954 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file permissive=1']

fail 1542060 2017-08-19 05:21:13 2017-08-20 09:45:45 2017-08-20 16:03:56 6:18:11 0:35:12 5:42:59 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/sessionmap.yaml} 4
Failure Reason:

"2017-08-20 15:36:09.936306 mon.a mon.0 172.21.15.197:6789/0 210 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1542061 2017-08-19 05:21:13 2017-08-20 09:45:56 2017-08-20 10:05:55 0:19:59 0:13:56 0:06:03 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-08-20 09:54:19.864407 mon.a mon.0 172.21.15.183:6789/0 4 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

pass 1542062 2017-08-19 05:21:14 2017-08-20 09:47:40 2017-08-20 10:07:39 0:19:59 0:15:18 0:04:41 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1542063 2017-08-19 05:21:14 2017-08-20 09:48:00 2017-08-20 12:52:01 3:04:01 1:03:13 2:00:48 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/strays.yaml} 4
Failure Reason:

"2017-08-20 11:52:31.904885 mon.a mon.0 172.21.15.150:6789/0 220 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542064 2017-08-19 05:21:15 2017-08-20 09:48:39 2017-08-20 10:36:32 0:47:53 0:26:22 0:21:31 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1542065 2017-08-19 05:21:15 2017-08-20 09:49:53 2017-08-20 11:53:48 2:03:55 1:44:15 0:19:40 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1542066 2017-08-19 05:21:16 2017-08-20 09:50:04 2017-08-20 12:04:03 2:13:59 0:36:46 1:37:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/volume-client.yaml} 4
Failure Reason:

"2017-08-20 11:35:00.258801 mon.a mon.0 172.21.15.53:6789/0 208 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1542067 2017-08-19 05:21:17 2017-08-20 09:52:12 2017-08-20 10:18:11 0:25:59 0:11:28 0:14:31 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_trivial_sync.yaml} 3