Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 1583082 2017-08-31 05:20:18 2017-09-02 04:21:24 2017-09-02 05:11:25 0:50:01 0:12:00 0:38:01 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1583083 2017-08-31 05:20:18 2017-09-02 04:21:25 2017-09-02 05:37:25 1:16:00 1:06:20 0:09:40 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/bluestore.yaml tasks/kernel_cfuse_workunits_dbench_iozone.yaml} 4
fail 1583084 2017-08-31 05:20:19 2017-09-02 04:21:41 2017-09-02 05:15:33 0:53:52 0:18:53 0:34:59 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/auto-repair.yaml} 4
Failure Reason:

"2017-09-02 05:02:53.802194 mon.a mon.0 172.21.15.55:6789/0 205 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583085 2017-08-31 05:20:20 2017-09-02 04:21:36 2017-09-02 05:45:36 1:24:00 0:59:04 0:24:56 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/default.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 04:46:21.740862 mon.b mon.0 172.21.15.59:6789/0 168 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1583086 2017-08-31 05:20:20 2017-09-02 04:22:13 2017-09-02 05:10:11 0:47:58 0:34:56 0:13:02 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1583087 2017-08-31 05:20:21 2017-09-02 04:24:15 2017-09-02 04:56:13 0:31:58 0:13:57 0:18:01 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/backtrace.yaml} 4
Failure Reason:

"2017-09-02 04:47:01.384280 mon.a mon.0 172.21.15.138:6789/0 197 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583088 2017-08-31 05:20:22 2017-09-02 04:24:20 2017-09-02 05:12:19 0:47:59 0:38:04 0:09:55 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1583089 2017-08-31 05:20:22 2017-09-02 04:26:03 2017-09-02 05:05:57 0:39:54 0:19:54 0:20:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1583090 2017-08-31 05:20:23 2017-09-02 04:27:38 2017-09-02 05:51:35 1:23:57 0:17:13 1:06:44 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-limits.yaml} 4
Failure Reason:

"2017-09-02 05:35:53.628170 mon.a mon.0 172.21.15.169:6789/0 207 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583091 2017-08-31 05:20:24 2017-09-02 04:27:56 2017-09-02 08:34:00 4:06:04 3:54:11 0:11:53 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi066 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1583092 2017-08-31 05:20:24 2017-09-02 04:29:55 2017-09-02 05:10:13 0:40:18 0:24:27 0:15:51 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 04:49:23.664917 mon.b mon.0 172.21.15.38:6789/0 117 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

dead 1583093 2017-08-31 05:20:25 2017-09-02 04:30:06 2017-09-02 16:36:31 12:06:25 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-recovery.yaml} 4
fail 1583094 2017-08-31 05:20:26 2017-09-02 04:30:20 2017-09-02 06:04:19 1:33:59 0:49:25 0:44:34 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

"2017-09-02 05:32:32.596741 mon.b mon.0 172.21.15.100:6789/0 288 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1583095 2017-08-31 05:20:26 2017-09-02 04:32:14 2017-09-02 05:32:14 1:00:00 0:46:00 0:14:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 04:52:46.918399 mon.b mon.0 172.21.15.137:6789/0 243 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1583096 2017-08-31 05:20:27 2017-09-02 04:32:38 2017-09-02 08:15:06 3:42:28 0:25:22 3:17:06 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/config-commands.yaml} 4
Failure Reason:

"2017-09-02 07:32:09.014760 mon.a mon.0 172.21.15.134:6789/0 208 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583097 2017-08-31 05:20:27 2017-09-02 04:32:33 2017-09-02 05:44:14 1:11:41 0:50:45 0:20:56 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1583098 2017-08-31 05:20:28 2017-09-02 04:32:50 2017-09-02 06:12:50 1:40:00 0:56:22 0:43:38 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1583099 2017-08-31 05:20:29 2017-09-02 04:33:11 2017-09-02 07:09:13 2:36:02 0:33:02 2:03:00 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/damage.yaml} 4
Failure Reason:

"2017-09-02 06:40:04.334766 mon.a mon.0 172.21.15.176:6789/0 381 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1583100 2017-08-31 05:20:29 2017-09-02 04:33:32 2017-09-02 05:33:29 0:59:57 0:48:52 0:11:05 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 04:47:46.597559 mon.b mon.1 172.21.15.193:6789/0 92 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

pass 1583101 2017-08-31 05:20:30 2017-09-02 04:34:19 2017-09-02 05:00:18 0:25:59 0:12:02 0:13:57 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1583102 2017-08-31 05:20:31 2017-09-02 04:34:23 2017-09-02 05:12:21 0:37:58 0:28:41 0:09:17 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/data-scan.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

pass 1583103 2017-08-31 05:20:31 2017-09-02 04:35:48 2017-09-02 05:31:32 0:55:44 0:13:16 0:42:28 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1583104 2017-08-31 05:20:32 2017-09-02 04:36:04 2017-09-02 06:30:05 1:54:01 1:30:45 0:23:16 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1583105 2017-08-31 05:20:33 2017-09-02 04:36:24 2017-09-02 10:24:31 5:48:07 0:28:43 5:19:24 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml} 4
Failure Reason:

"2017-09-02 09:59:52.393855 mon.a mon.0 172.21.15.158:6789/0 195 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583106 2017-08-31 05:20:33 2017-09-02 04:37:38 2017-09-02 05:31:35 0:53:57 0:08:46 0:45:11 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1583107 2017-08-31 05:20:34 2017-09-02 04:37:40 2017-09-02 05:01:38 0:23:58 0:14:37 0:09:21 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1583108 2017-08-31 05:20:34 2017-09-02 04:38:20 2017-09-02 05:46:21 1:08:01 0:55:09 0:12:52 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/filestore-xfs.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
Failure Reason:

"2017-09-02 04:59:12.310616 mon.b mon.0 172.21.15.18:6789/0 147 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1583109 2017-08-31 05:20:35 2017-09-02 04:39:26 2017-09-02 05:21:23 0:41:57 0:26:25 0:15:32 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/forward-scrub.yaml} 4
Failure Reason:

"2017-09-02 05:03:23.310544 mon.a mon.0 172.21.15.21:6789/0 465 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1583110 2017-08-31 05:20:36 2017-09-02 04:39:40 2017-09-02 05:37:42 0:58:02 0:37:01 0:21:01 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 05:02:46.015141 mon.a mon.0 172.21.15.50:6789/0 192 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1583111 2017-08-31 05:20:36 2017-09-02 04:40:16 2017-09-02 05:34:17 0:54:01 0:39:51 0:14:10 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1583112 2017-08-31 05:20:37 2017-09-02 04:40:35 2017-09-02 05:16:35 0:36:00 0:22:24 0:13:36 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/journal-repair.yaml} 4
Failure Reason:

"2017-09-02 04:58:41.311982 mon.a mon.0 172.21.15.37:6789/0 386 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

pass 1583113 2017-08-31 05:20:38 2017-09-02 04:41:24 2017-09-02 05:43:22 1:01:58 0:43:41 0:18:17 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1583114 2017-08-31 05:20:38 2017-09-02 04:43:47 2017-09-02 05:31:39 0:47:52 0:22:52 0:25:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1583115 2017-08-31 05:20:39 2017-09-02 04:43:55 2017-09-02 06:17:51 1:33:56 0:12:00 1:21:56 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-flush.yaml} 4
Failure Reason:

"2017-09-02 06:09:18.801094 mon.a mon.0 172.21.15.39:6789/0 191 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583116 2017-08-31 05:20:39 2017-09-02 04:44:17 2017-09-02 08:10:08 3:25:51 3:02:37 0:23:14 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

"2017-09-02 07:26:55.102847 mon.a mon.0 172.21.15.2:6789/0 354 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 1583117 2017-08-31 05:20:40 2017-09-02 04:46:05 2017-09-02 06:22:06 1:36:01 1:00:04 0:35:57 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 05:13:01.122206 mon.b mon.0 172.21.15.51:6789/0 186 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583118 2017-08-31 05:20:41 2017-09-02 04:47:33 2017-09-02 05:19:33 0:32:00 0:09:46 0:22:14 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-full.yaml} 4
Failure Reason:

Command failed on smithi182 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpEYJaec'

pass 1583119 2017-08-31 05:20:41 2017-09-02 04:47:44 2017-09-02 06:01:44 1:14:00 1:00:37 0:13:23 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1583120 2017-08-31 05:20:42 2017-09-02 04:48:03 2017-09-02 05:34:02 0:45:59 0:30:14 0:15:45 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
fail 1583121 2017-08-31 05:20:43 2017-09-02 04:48:50 2017-09-02 08:30:57 3:42:07 0:13:00 3:29:07 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/pool-perm.yaml} 4
Failure Reason:

"2017-09-02 08:21:33.033139 mon.a mon.0 172.21.15.82:6789/0 206 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583122 2017-08-31 05:20:43 2017-09-02 04:48:49 2017-09-02 05:30:48 0:41:59 0:18:15 0:23:44 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1583123 2017-08-31 05:20:44 2017-09-02 04:49:16 2017-09-02 05:43:14 0:53:58 0:18:06 0:35:52 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1583124 2017-08-31 05:20:45 2017-09-02 04:50:01 2017-09-02 06:58:02 2:08:01 0:19:09 1:48:52 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/sessionmap.yaml} 4
Failure Reason:

"2017-09-02 06:41:53.291321 mon.a mon.0 172.21.15.92:6789/0 215 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583125 2017-08-31 05:20:45 2017-09-02 04:50:04 2017-09-02 05:32:01 0:41:57 0:25:58 0:15:59 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 05:07:01.633043 mon.b mon.0 172.21.15.63:6789/0 113 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log

pass 1583126 2017-08-31 05:20:46 2017-09-02 04:50:43 2017-09-02 05:18:41 0:27:58 0:12:58 0:15:00 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1583127 2017-08-31 05:20:47 2017-09-02 04:50:59 2017-09-02 06:00:58 1:09:59 0:22:15 0:47:44 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/strays.yaml} 4
Failure Reason:

Test failure: test_files_throttle (tasks.cephfs.test_strays.TestStrays)

pass 1583128 2017-08-31 05:20:47 2017-09-02 04:53:57 2017-09-02 05:35:49 0:41:52 0:23:39 0:18:13 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1583129 2017-08-31 05:20:48 2017-09-02 04:54:02 2017-09-02 07:12:04 2:18:02 2:09:44 0:08:18 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1583130 2017-08-31 05:20:49 2017-09-02 04:54:13 2017-09-02 07:12:15 2:18:02 0:36:07 1:41:55 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/volume-client.yaml} 4
Failure Reason:

"2017-09-02 06:39:50.838902 mon.a mon.0 172.21.15.179:6789/0 206 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583131 2017-08-31 05:20:49 2017-09-02 04:55:31 2017-09-02 05:35:31 0:40:00 0:14:56 0:25:04 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1583132 2017-08-31 05:20:50 2017-09-02 04:56:33 2017-09-02 05:28:33 0:32:00 0:12:46 0:19:14 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1583133 2017-08-31 05:20:50 2017-09-02 04:58:18 2017-09-02 06:38:15 1:39:57 0:51:50 0:48:07 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/filestore-xfs.yaml tasks/kernel_cfuse_workunits_dbench_iozone.yaml} 4
Failure Reason:

"2017-09-02 06:05:15.160059 mon.b mon.0 172.21.15.9:6789/0 171 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1583134 2017-08-31 05:20:51 2017-09-02 04:58:19 2017-09-02 09:12:24 4:14:05 0:18:38 3:55:27 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/auto-repair.yaml} 4
Failure Reason:

"2017-09-02 09:00:05.926144 mon.a mon.0 172.21.15.26:6789/0 197 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583135 2017-08-31 05:20:52 2017-09-02 04:58:39 2017-09-02 05:44:38 0:45:59 0:33:25 0:12:34 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 05:15:56.746112 mon.a mon.0 172.21.15.1:6789/0 187 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1583136 2017-08-31 05:20:52 2017-09-02 04:59:35 2017-09-02 06:57:38 1:58:03 1:41:20 0:16:43 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1583137 2017-08-31 05:20:53 2017-09-02 05:00:16 2017-09-02 05:56:05 0:55:49 0:18:11 0:37:38 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/backtrace.yaml} 4
Failure Reason:

"2017-09-02 05:43:41.845121 mon.a mon.0 172.21.15.6:6789/0 221 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583138 2017-08-31 05:20:54 2017-09-02 05:00:08 2017-09-02 06:06:08 1:06:00 0:37:50 0:28:10 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1583139 2017-08-31 05:20:54 2017-09-02 05:00:22 2017-09-02 05:28:20 0:27:58 0:20:47 0:07:11 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1583140 2017-08-31 05:20:55 2017-09-02 05:01:40 2017-09-02 05:51:40 0:50:00 0:16:20 0:33:40 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-limits.yaml} 4
Failure Reason:

"2017-09-02 05:37:26.925566 mon.a mon.0 172.21.15.53:6789/0 205 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583141 2017-08-31 05:20:56 2017-09-02 05:01:55 2017-09-02 09:29:59 4:28:04 4:03:47 0:24:17 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi071 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1583142 2017-08-31 05:20:56 2017-09-02 05:04:20 2017-09-02 05:34:17 0:29:57 0:18:49 0:11:08 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 05:23:21.311889 mon.a mon.0 172.21.15.38:6789/0 189 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

dead 1583143 2017-08-31 05:20:57 2017-09-02 05:05:32 2017-09-02 17:13:03 12:07:31 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-recovery.yaml} 4
fail 1583144 2017-08-31 05:20:58 2017-09-02 05:05:39 2017-09-02 06:07:39 1:02:00 0:47:11 0:14:49 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

"2017-09-02 05:48:22.606848 mon.b mon.1 172.21.15.131:6789/0 108 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1583145 2017-08-31 05:20:58 2017-09-02 05:06:04 2017-09-02 07:00:00 1:53:56 1:00:33 0:53:23 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 06:11:40.822839 mon.b mon.0 172.21.15.59:6789/0 161 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 1583146 2017-08-31 05:20:59 2017-09-02 05:06:50 2017-09-02 07:48:51 2:42:01 0:14:11 2:27:50 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/config-commands.yaml} 4
Failure Reason:

"2017-09-02 07:37:18.005081 mon.a mon.0 172.21.15.28:6789/0 198 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583147 2017-08-31 05:20:59 2017-09-02 05:08:03 2017-09-02 05:24:02 0:15:59 0:10:20 0:05:39 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1583148 2017-08-31 05:21:00 2017-09-02 05:10:29 2017-09-02 05:34:25 0:23:56 0:22:29 0:01:27 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1583149 2017-08-31 05:21:01 2017-09-02 05:10:32 2017-09-02 06:10:25 0:59:53 0:26:00 0:33:53 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/damage.yaml} 4
Failure Reason:

"2017-09-02 05:48:52.490952 mon.a mon.0 172.21.15.203:6789/0 395 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1583150 2017-08-31 05:21:01 2017-09-02 05:11:55 2017-09-02 06:39:56 1:28:01 0:39:32 0:48:29 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 06:02:14.937716 mon.a mon.0 172.21.15.37:6789/0 189 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 1583151 2017-08-31 05:21:02 2017-09-02 05:12:05 2017-09-02 05:32:03 0:19:58 0:10:08 0:09:50 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1583152 2017-08-31 05:21:02 2017-09-02 05:12:22 2017-09-02 06:16:21 1:03:59 0:37:23 0:26:36 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/data-scan.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

pass 1583153 2017-08-31 05:21:03 2017-09-02 05:12:23 2017-09-02 05:34:22 0:21:59 0:14:39 0:07:20 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1583154 2017-08-31 05:21:04 2017-09-02 05:12:39 2017-09-02 06:38:40 1:26:01 1:16:15 0:09:46 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1583155 2017-08-31 05:21:04 2017-09-02 05:13:25 2017-09-02 06:03:24 0:49:59 0:27:46 0:22:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/failover.yaml} 4
Failure Reason:

"2017-09-02 05:37:26.248671 mon.a mon.0 172.21.15.114:6789/0 215 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583156 2017-08-31 05:21:05 2017-09-02 05:15:35 2017-09-02 05:33:34 0:17:59 0:10:48 0:07:11 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1583157 2017-08-31 05:21:06 2017-09-02 05:16:54 2017-09-02 05:40:46 0:23:52 0:10:46 0:13:06 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_direct_io.yaml} 3
fail 1583158 2017-08-31 05:21:06 2017-09-02 05:16:48 2017-09-02 09:10:51 3:54:03 2:37:08 1:16:55 smithi master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore/bluestore.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
Failure Reason:

"2017-09-02 06:40:14.819093 mon.b mon.0 172.21.15.1:6789/0 183 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1583159 2017-08-31 05:21:07 2017-09-02 05:16:49 2017-09-02 08:48:52 3:32:03 0:22:53 3:09:10 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/forward-scrub.yaml} 4
Failure Reason:

"2017-09-02 08:31:37.025133 mon.a mon.0 172.21.15.67:6789/0 448 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1583160 2017-08-31 05:21:08 2017-09-02 05:17:16 2017-09-02 05:55:13 0:37:57 0:19:29 0:18:28 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/default.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 05:40:06.160090 mon.b mon.0 172.21.15.132:6789/0 174 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1583161 2017-08-31 05:21:08 2017-09-02 05:18:32 2017-09-02 07:04:33 1:46:01 1:15:26 0:30:35 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1583162 2017-08-31 05:21:09 2017-09-02 05:18:42 2017-09-02 06:02:42 0:44:00 0:25:14 0:18:46 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/journal-repair.yaml} 4
Failure Reason:

"2017-09-02 05:42:50.186033 mon.a mon.0 172.21.15.202:6789/0 405 : cluster [ERR] Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)" in cluster log

fail 1583163 2017-08-31 05:21:09 2017-09-02 05:19:44 2017-09-02 06:29:44 1:10:00 0:51:39 0:18:21 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
Failure Reason:

"2017-09-02 05:44:54.506431 mon.a mon.0 172.21.15.21:6789/0 216 : cluster [WRN] daemon mds.a-s is not responding, replacing it as rank 0 with standby daemon mds.a" in cluster log

pass 1583164 2017-08-31 05:21:10 2017-09-02 05:21:36 2017-09-02 05:51:35 0:29:59 0:22:40 0:07:19 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_o_trunc.yaml} 3
fail 1583165 2017-08-31 05:21:11 2017-09-02 05:24:08 2017-09-02 07:58:06 2:33:58 0:12:42 2:21:16 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-flush.yaml} 4
Failure Reason:

"2017-09-02 07:48:18.962230 mon.a mon.0 172.21.15.130:6789/0 221 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583166 2017-08-31 05:21:11 2017-09-02 05:26:18 2017-09-02 09:54:23 4:28:05 3:58:39 0:29:26 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_snaps.yaml} 3
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

fail 1583167 2017-08-31 05:21:12 2017-09-02 05:27:21 2017-09-02 06:05:20 0:37:59 0:30:37 0:07:22 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/filestore-xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-09-02 05:39:24.153281 mon.b mon.0 172.21.15.95:6789/0 169 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583168 2017-08-31 05:21:13 2017-09-02 05:28:32 2017-09-02 05:52:32 0:24:00 0:11:19 0:12:41 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-full.yaml} 4
Failure Reason:

Command failed on smithi156 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-1/fsid /tmp/tmpboOgWP'

fail 1583169 2017-08-31 05:21:13 2017-09-02 05:28:36 2017-09-02 06:44:36 1:16:00 1:01:28 0:14:32 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

"2017-09-02 05:51:44.359796 mon.b mon.0 172.21.15.85:6789/0 243 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum b,c" in cluster log

fail 1583170 2017-08-31 05:21:14 2017-09-02 05:29:38 2017-09-02 06:15:28 0:45:50 0:31:03 0:14:47 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

SELinux denials found on ubuntu@smithi003.front.sepia.ceph.com: ['type=AVC msg=audit(1504331484.463:4380): avc: denied { getattr } for pid=22751 comm="ceph-osd" path="/dev/nvme0n1p2" dev="devtmpfs" ino=58265 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file permissive=1']

fail 1583171 2017-08-31 05:21:14 2017-09-02 05:32:21 2017-09-02 05:56:26 0:24:05 0:14:52 0:09:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/pool-perm.yaml} 4
Failure Reason:

"2017-09-02 05:44:16.799595 mon.a mon.0 172.21.15.82:6789/0 198 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

pass 1583172 2017-08-31 05:21:15 2017-09-02 05:31:59 2017-09-02 06:01:58 0:29:59 0:16:16 0:13:43 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1583173 2017-08-31 05:21:16 2017-09-02 05:31:58 2017-09-02 06:05:58 0:34:00 0:21:30 0:12:30 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
fail 1583174 2017-08-31 05:21:16 2017-09-02 05:31:59 2017-09-02 09:28:02 3:56:03 0:25:59 3:30:04 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/sessionmap.yaml} 4
Failure Reason:

"2017-09-02 09:09:09.061145 mon.a mon.0 172.21.15.12:6789/0 218 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)" in cluster log

fail 1583175 2017-08-31 05:21:17 2017-09-02 05:31:58 2017-09-02 06:01:58 0:30:00 0:16:01 0:13:59 smithi master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml objectstore/bluestore.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-09-02 05:53:08.978933 mon.a mon.1 172.21.15.138:6789/0 49 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 1583176 2017-08-31 05:21:17 2017-09-02 05:32:06 2017-09-02 05:48:02 0:15:56 0:12:19 0:03:37 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
fail 1583177 2017-08-31 05:21:18 2017-09-02 05:32:05 2017-09-02 06:16:04 0:43:59 0:37:52 0:06:07 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/strays.yaml} 4
Failure Reason:

Test failure: test_ops_throttle (tasks.cephfs.test_strays.TestStrays)

pass 1583178 2017-08-31 05:21:19 2017-09-02 05:32:16 2017-09-02 05:58:15 0:25:59 0:24:45 0:01:14 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1583179 2017-08-31 05:21:19 2017-09-02 05:33:33 2017-09-02 07:39:34 2:06:01 1:46:37 0:19:24 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
fail 1583180 2017-08-31 05:21:20 2017-09-02 05:33:37 2017-09-02 06:29:36 0:55:59 0:03:23 0:52:36 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/volume-client.yaml} 4
Failure Reason:

Command failed on smithi079 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=12.2.0-22-gfa99676-1trusty ceph-mds=12.2.0-22-gfa99676-1trusty ceph-mgr=12.2.0-22-gfa99676-1trusty ceph-common=12.2.0-22-gfa99676-1trusty ceph-fuse=12.2.0-22-gfa99676-1trusty ceph-test=12.2.0-22-gfa99676-1trusty radosgw=12.2.0-22-gfa99676-1trusty python-ceph=12.2.0-22-gfa99676-1trusty libcephfs2=12.2.0-22-gfa99676-1trusty libcephfs-dev=12.2.0-22-gfa99676-1trusty libcephfs-java=12.2.0-22-gfa99676-1trusty libcephfs-jni=12.2.0-22-gfa99676-1trusty librados2=12.2.0-22-gfa99676-1trusty librbd1=12.2.0-22-gfa99676-1trusty rbd-fuse=12.2.0-22-gfa99676-1trusty'

pass 1583181 2017-08-31 05:21:21 2017-09-02 05:34:05 2017-09-02 06:00:03 0:25:58 0:11:43 0:14:15 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_trivial_sync.yaml} 3