Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1641244 2017-09-16 21:36:27 2017-09-16 22:38:36 2017-09-16 22:56:36 0:18:00 0:09:56 0:08:04 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/auto-repair.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 22:49:36.901032 mon.a mon.0 172.21.15.158:6789/0 517 : cluster [WRN] Health check failed: 1 MDSs are read only (MDS_READ_ONLY)" in cluster log

fail 1641245 2017-09-16 21:36:27 2017-09-16 22:38:37 2017-09-16 23:00:37 0:22:00 0:14:18 0:07:42 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-limits.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 22:50:30.597410 mon.a mon.0 172.21.15.188:6789/0 619 : cluster [WRN] MDS health message (mds.0): MDS cache is too large (521kB/1GB); 202 inodes in use by clients, 0 stray files" in cluster log

dead 1641246 2017-09-16 21:36:28 2017-09-16 22:38:46 2017-09-17 10:45:23 12:06:37 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-recovery.yaml whitelist_health.yaml} 4
fail 1641247 2017-09-16 21:36:29 2017-09-16 22:38:56 2017-09-16 23:10:56 0:32:00 0:22:27 0:09:33 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/data-scan.yaml whitelist_health.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

fail 1641248 2017-09-16 21:36:29 2017-09-16 22:39:09 2017-09-16 23:07:08 0:27:59 0:20:31 0:07:28 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 22:50:23.730379 mon.a mon.0 172.21.15.169:6789/0 460 : cluster [WRN] daemon mds.b is not responding, replacing it as rank 0 with standby daemon mds.d" in cluster log

pass 1641249 2017-09-16 21:36:30 2017-09-16 22:42:36 2017-09-16 23:10:36 0:28:00 0:22:47 0:05:13 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/bluestore.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
fail 1641250 2017-09-16 21:36:30 2017-09-16 22:42:50 2017-09-16 22:56:49 0:13:59 0:06:46 0:07:13 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on smithi073 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmp75M0gm'

fail 1641251 2017-09-16 21:36:31 2017-09-16 22:45:02 2017-09-16 23:11:02 0:26:00 0:18:13 0:07:47 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/strays.yaml whitelist_health.yaml} 4
Failure Reason:

Test failure: test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays)

fail 1641252 2017-09-16 21:36:32 2017-09-16 22:45:19 2017-09-16 23:03:19 0:18:00 0:10:01 0:07:59 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/auto-repair.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 22:56:24.070876 mon.a mon.0 172.21.15.56:6789/0 511 : cluster [WRN] Health check failed: 1 MDSs are read only (MDS_READ_ONLY)" in cluster log

fail 1641253 2017-09-16 21:36:32 2017-09-16 22:48:45 2017-09-16 23:10:44 0:21:59 0:13:51 0:08:08 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/client-limits.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 22:59:42.852012 mon.a mon.0 172.21.15.95:6789/0 643 : cluster [WRN] MDS health message (mds.0): MDS cache is too large (521kB/1GB); 202 inodes in use by clients, 0 stray files" in cluster log

fail 1641254 2017-09-16 21:36:33 2017-09-16 22:49:56 2017-09-17 00:07:57 1:18:01 1:08:21 0:09:40 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/client-recovery.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 23:09:38.811653 mds.d mds.0 172.21.15.84:6810/1429441865 1 : cluster [WRN] evicting unresponsive client smithi156: (4323), after waiting 45 seconds during MDS startup" in cluster log

pass 1641255 2017-09-16 21:36:33 2017-09-16 22:50:09 2017-09-16 23:18:09 0:28:00 0:22:13 0:05:47 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
fail 1641256 2017-09-16 21:36:34 2017-09-16 22:50:09 2017-09-16 23:20:09 0:30:00 0:21:53 0:08:07 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/data-scan.yaml whitelist_health.yaml} 4
Failure Reason:

Test failure: test_rebuild_simple_altpool (tasks.cephfs.test_data_scan.TestDataScan)

fail 1641257 2017-09-16 21:36:35 2017-09-16 22:50:25 2017-09-16 23:18:24 0:27:59 0:20:12 0:07:47 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/failover.yaml whitelist_health.yaml} 4
Failure Reason:

"2017-09-16 23:01:45.688492 mon.a mon.0 172.21.15.100:6789/0 465 : cluster [WRN] daemon mds.b is not responding, replacing it as rank 0 with standby daemon mds.d" in cluster log

pass 1641258 2017-09-16 21:36:35 2017-09-16 22:50:37 2017-09-16 23:18:36 0:27:59 0:24:28 0:03:31 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore/filestore-xfs.yaml tasks/kclient_workunit_misc.yaml} 3
fail 1641259 2017-09-16 21:36:36 2017-09-16 22:51:24 2017-09-16 23:07:23 0:15:59 0:06:37 0:09:22 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/bluestore.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on smithi099 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpKdEKSW'

pass 1641260 2017-09-16 21:36:36 2017-09-16 22:52:51 2017-09-16 23:32:51 0:40:00 0:36:54 0:03:06 smithi master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1641261 2017-09-16 21:36:37 2017-09-16 22:54:53 2017-09-16 23:36:53 0:42:00 0:33:54 0:08:06 smithi master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore/filestore-xfs.yaml tasks/strays.yaml whitelist_health.yaml} 4
Failure Reason:

Test failure: test_replicated_delete_speed (tasks.cephfs.test_strays.TestStrays)