Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 1659042 2017-09-22 19:35:08 2017-09-22 19:49:35 2017-09-22 20:05:35 0:16:00 0:10:43 0:05:17 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/auto-repair.yaml whitelist_health.yaml} 4
fail 1659043 2017-09-22 19:35:09 2017-09-22 19:49:36 2017-09-22 20:17:35 0:27:59 0:16:03 0:11:56 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-limits.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:06:00.930241 mon.a mon.0 172.21.15.105:6789/0 642 : cluster [WRN] MDS health message (mds.0): MDS cache is too large (521kB/1GB); 202 inodes in use by clients, 0 stray files" in cluster log

fail 1659044 2017-09-22 19:35:10 2017-09-22 19:49:36 2017-09-22 20:43:36 0:54:00 0:40:15 0:13:45 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-recovery.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:12:57.631135 mds.b mds.0 172.21.15.29:6810/1407294005 1 : cluster [WRN] evicting unresponsive client smithi088: (4324), after waiting 45 seconds during MDS startup" in cluster log

dead 1659045 2017-09-22 19:35:10 2017-09-22 19:49:46 2017-09-22 20:45:47 0:56:01 0:38:29 0:17:32 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/data-scan.yaml whitelist_health.yaml} 4
Failure Reason: Test failure: test_rebuild_nondefault_layout (tasks.cephfs.test_data_scan.TestDataScan), test_rebuild_nondefault_layout (tasks.cephfs.test_data_scan.TestDataScan)

fail 1659046 2017-09-22 19:35:11 2017-09-22 19:51:33 2017-09-22 20:25:33 0:34:00 0:23:35 0:10:25 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:07:40.456020 mon.a mon.0 172.21.15.149:6789/0 456 : cluster [WRN] daemon mds.c is not responding, replacing it as rank 0 with standby daemon mds.a" in cluster log

pass 1659047 2017-09-22 19:35:12 2017-09-22 19:51:33 2017-09-22 20:23:33 0:32:00 0:25:55 0:06:05 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1659048 2017-09-22 19:35:13 2017-09-22 19:51:33 2017-09-22 20:09:33 0:18:00 0:07:02 0:10:58 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason: Command failed on smithi182 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-1/fsid /tmp/tmpGhj2zd'

pass 1659049 2017-09-22 19:35:13 2017-09-22 19:51:33 2017-09-22 20:29:33 0:38:00 0:27:20 0:10:40 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/strays.yaml whitelist_health.yaml} 4
pass 1659050 2017-09-22 19:35:14 2017-09-22 19:51:35 2017-09-22 20:47:35 0:56:00 0:09:52 0:46:08 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/auto-repair.yaml whitelist_health.yaml} 4
fail 1659051 2017-09-22 19:35:15 2017-09-22 19:53:38 2017-09-22 20:23:38 0:30:00 0:16:28 0:13:32 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-limits.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:12:40.368931 mon.a mon.0 172.21.15.64:6789/0 657 : cluster [WRN] MDS health message (mds.0): MDS cache is too large (496kB/1GB); 202 inodes in use by clients, 0 stray files" in cluster log

fail 1659052 2017-09-22 19:35:18 2017-09-22 19:53:39 2017-09-22 20:39:39 0:46:00 0:37:50 0:08:10 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-recovery.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:11:45.042819 mds.d mds.0 172.21.15.47:6810/2224118575 1 : cluster [WRN] evicting unresponsive client smithi204: (4324), after waiting 45 seconds during MDS startup" in cluster log

pass 1659053 2017-09-22 19:35:18 2017-09-22 19:53:38 2017-09-22 20:21:38 0:28:00 0:21:14 0:06:46 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
fail 1659054 2017-09-22 19:35:19 2017-09-22 19:53:39 2017-09-22 20:31:39 0:38:00 0:26:02 0:11:58 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/data-scan.yaml whitelist_health.yaml} 4
Failure Reason: Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

fail 1659055 2017-09-22 19:35:20 2017-09-22 19:53:38 2017-09-22 20:21:38 0:28:00 0:19:28 0:08:32 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/failover.yaml whitelist_health.yaml} 4
Failure Reason: "2017-09-22 20:04:29.354328 mon.a mon.0 172.21.15.197:6789/0 467 : cluster [WRN] daemon mds.d is not responding, replacing it as rank 0 with standby daemon mds.b" in cluster log

pass 1659056 2017-09-22 19:35:21 2017-09-22 19:53:38 2017-09-22 20:33:39 0:40:01 0:29:15 0:10:46 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
fail 1659057 2017-09-22 19:35:22 2017-09-22 19:55:28 2017-09-22 20:17:28 0:22:00 0:09:14 0:12:46 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason: Command failed on smithi114 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpy6vTAu'

pass 1659058 2017-09-22 19:35:22 2017-09-22 19:55:29 2017-09-22 20:45:30 0:50:01 0:39:08 0:10:53 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1659059 2017-09-22 19:35:23 2017-09-22 19:55:29 2017-09-22 21:07:30 1:12:01 0:40:07 0:31:54 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/strays.yaml whitelist_health.yaml} 4