Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 1270563 2017-06-08 05:20:31 2017-06-08 17:47:59 2017-06-08 18:42:00 0:54:01 0:42:04 0:11:57 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1270567 2017-06-08 05:20:32 2017-06-08 17:49:38 2017-06-08 23:13:45 5:24:07 2:05:19 3:18:48 ovh master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml fs/xfs.yaml tasks/kernel_cfuse_workunits_dbench_iozone.yaml} 4
pass 1270572 2017-06-08 05:20:33 2017-06-08 17:50:05 2017-06-08 20:32:08 2:42:03 0:37:58 2:04:05 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/auto-repair.yaml xfs.yaml} 4
fail 1270575 2017-06-08 05:20:34 2017-06-08 17:52:04 2017-06-08 20:12:07 2:20:03 1:46:41 0:33:22 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on ovh016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 1270578 2017-06-08 05:20:34 2017-06-08 17:59:26 2017-06-08 19:47:28 1:48:02 1:23:45 0:24:17 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
pass 1270582 2017-06-08 05:20:35 2017-06-08 18:02:22 2017-06-08 19:52:24 1:50:02 1:26:15 0:23:47 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_misc.yaml} 3
fail 1270587 2017-06-08 05:20:36 2017-06-08 18:04:38 2017-06-08 22:08:43 4:04:05 0:18:16 3:45:49 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/backtrace.yaml xfs.yaml} 4
Failure Reason:

{'ovh084.front.sepia.ceph.com': {'msg': 'One or more items failed', 'failed': True, 'changed': False}}

fail 1270590 2017-06-08 05:20:36 2017-06-08 18:08:09 2017-06-08 19:08:11 1:00:02 0:34:19 0:25:43 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_o_trunc.yaml} 3
Failure Reason:

Command failed on ovh039 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=11.2.0-291-gae0eab5-1xenial ceph-mds=11.2.0-291-gae0eab5-1xenial ceph-mgr=11.2.0-291-gae0eab5-1xenial ceph-common=11.2.0-291-gae0eab5-1xenial ceph-fuse=11.2.0-291-gae0eab5-1xenial ceph-test=11.2.0-291-gae0eab5-1xenial radosgw=11.2.0-291-gae0eab5-1xenial python-ceph=11.2.0-291-gae0eab5-1xenial libcephfs2=11.2.0-291-gae0eab5-1xenial libcephfs-dev=11.2.0-291-gae0eab5-1xenial libcephfs-java=11.2.0-291-gae0eab5-1xenial libcephfs-jni=11.2.0-291-gae0eab5-1xenial librados2=11.2.0-291-gae0eab5-1xenial librbd1=11.2.0-291-gae0eab5-1xenial rbd-fuse=11.2.0-291-gae0eab5-1xenial'

pass 1270594 2017-06-08 05:20:37 2017-06-08 18:13:31 2017-06-08 20:53:35 2:40:04 2:26:32 0:13:32 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_snaps.yaml} 3
pass 1270597 2017-06-08 05:20:37 2017-06-08 18:14:21 2017-06-08 22:56:27 4:42:06 0:41:48 4:00:18 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/client-limits.yaml xfs.yaml} 4
fail 1270601 2017-06-08 05:20:38 2017-06-08 18:18:46 2017-06-08 19:10:46 0:52:00 0:29:52 0:22:08 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh080 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=11.2.0-291-gae0eab5-1xenial ceph-mds=11.2.0-291-gae0eab5-1xenial ceph-mgr=11.2.0-291-gae0eab5-1xenial ceph-common=11.2.0-291-gae0eab5-1xenial ceph-fuse=11.2.0-291-gae0eab5-1xenial ceph-test=11.2.0-291-gae0eab5-1xenial radosgw=11.2.0-291-gae0eab5-1xenial python-ceph=11.2.0-291-gae0eab5-1xenial libcephfs2=11.2.0-291-gae0eab5-1xenial libcephfs-dev=11.2.0-291-gae0eab5-1xenial libcephfs-java=11.2.0-291-gae0eab5-1xenial libcephfs-jni=11.2.0-291-gae0eab5-1xenial librados2=11.2.0-291-gae0eab5-1xenial librbd1=11.2.0-291-gae0eab5-1xenial rbd-fuse=11.2.0-291-gae0eab5-1xenial'

pass 1270605 2017-06-08 05:20:39 2017-06-08 18:30:36 2017-06-08 20:00:32 1:29:56 1:08:32 0:21:24 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1270608 2017-06-08 05:20:39 2017-06-08 18:30:41 2017-06-08 19:58:38 1:27:57 1:15:05 0:12:52 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-06-08 19:42:57.567052 osd.2 158.69.65.204:6800/8205 1828 : cluster [WRN] OSD near full (90%)" in cluster log

dead 1270610 2017-06-08 05:20:40 2017-06-08 18:34:00 2017-06-09 06:36:35 12:02:35 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/client-recovery.yaml xfs.yaml} 4
pass 1270612 2017-06-08 05:20:41 2017-06-08 18:34:29 2017-06-08 19:38:29 1:04:00 0:53:46 0:10:14 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1270614 2017-06-08 05:20:41 2017-06-08 18:36:42 2017-06-08 19:48:42 1:12:00 0:51:39 0:20:21 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
pass 1270616 2017-06-08 05:20:42 2017-06-08 18:42:06 2017-06-08 22:20:10 3:38:04 0:23:36 3:14:28 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/config-commands.yaml xfs.yaml} 4
fail 1270618 2017-06-08 05:20:43 2017-06-08 18:42:06 2017-06-08 20:36:07 1:54:01 1:16:03 0:37:58 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-06-08 20:23:08.689960 osd.2 158.69.66.58:6800/8191 1 : cluster [WRN] OSD near full (90%)" in cluster log

pass 1270620 2017-06-08 05:20:43 2017-06-08 18:42:27 2017-06-08 20:36:28 1:54:01 0:41:47 1:12:14 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
pass 1270623 2017-06-08 05:20:44 2017-06-08 18:42:52 2017-06-08 20:54:54 2:12:02 1:55:24 0:16:38 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
pass 1270624 2017-06-08 05:20:45 2017-06-08 18:43:29 2017-06-08 23:13:35 4:30:06 0:48:43 3:41:23 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/damage.yaml xfs.yaml} 4
pass 1270627 2017-06-08 05:20:45 2017-06-08 18:45:15 2017-06-08 19:41:16 0:56:01 0:39:49 0:16:12 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1270629 2017-06-08 05:20:46 2017-06-08 18:48:12 2017-06-08 19:42:12 0:54:00 0:40:14 0:13:46 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_trivial_sync.yaml} 3
pass 1270630 2017-06-08 05:20:46 2017-06-08 18:50:13 2017-06-08 23:08:18 4:18:05 0:52:03 3:26:02 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/data-scan.yaml xfs.yaml} 4
pass 1270632 2017-06-08 05:20:47 2017-06-08 18:50:13 2017-06-08 20:30:14 1:40:01 0:40:31 0:59:30 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1270634 2017-06-08 05:20:48 2017-06-08 18:52:32 2017-06-08 23:44:37 4:52:05 1:29:35 3:22:30 ovh master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml fs/xfs.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
fail 1270636 2017-06-08 05:20:48 2017-06-08 18:54:24 2017-06-08 21:38:27 2:44:03 2:16:27 0:27:36 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/default.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

"2017-06-08 21:06:57.392313 osd.3 158.69.66.10:6804/10674 3432 : cluster [WRN] OSD near full (90%)" in cluster log

pass 1270638 2017-06-08 05:20:49 2017-06-08 19:06:09 2017-06-08 20:52:10 1:46:01 1:31:39 0:14:22 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_kernel_untar_build.yaml} 3
pass 1270640 2017-06-08 05:20:50 2017-06-08 19:08:31 2017-06-08 22:04:35 2:56:04 0:40:17 2:15:47 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/forward-scrub.yaml xfs.yaml} 4
pass 1270642 2017-06-08 05:20:50 2017-06-08 19:10:52 2017-06-08 21:08:54 1:58:02 1:19:37 0:38:25 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_misc.yaml} 3
pass 1270644 2017-06-08 05:20:51 2017-06-08 19:12:13 2017-06-08 20:30:14 1:18:01 0:50:57 0:27:04 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_o_trunc.yaml} 3
dead 1270646 2017-06-08 05:20:52 2017-06-08 19:29:10 2017-06-09 07:31:45 12:02:35 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/journal-repair.yaml xfs.yaml} 4
pass 1270649 2017-06-08 05:20:53 2017-06-08 19:38:33 2017-06-08 23:04:44 3:26:11 2:23:26 1:02:45 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_snaps.yaml} 3
fail 1270651 2017-06-08 05:20:53 2017-06-08 19:38:33 2017-06-08 21:10:41 1:32:08 1:11:16 0:20:52 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/mds.yaml workloads/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2017-06-08 20:57:38.123374 osd.2 158.69.67.37:6800/8206 1 : cluster [WRN] OSD near full (90%)" in cluster log

pass 1270653 2017-06-08 05:20:54 2017-06-08 19:39:59 2017-06-08 21:08:02 1:28:03 1:08:20 0:19:43 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1270655 2017-06-08 05:20:55 2017-06-08 19:41:25 2017-06-08 22:55:29 3:14:04 0:37:59 2:36:05 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/mds-flush.yaml xfs.yaml} 4
fail 1270657 2017-06-08 05:20:55 2017-06-08 19:42:26 2017-06-08 21:12:43 1:30:17 1:05:35 0:24:42 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_ffsb.yaml} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on ovh049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 1270659 2017-06-08 05:20:56 2017-06-08 19:47:38 2017-06-08 20:53:39 1:06:01 0:50:32 0:15:29 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1270661 2017-06-08 05:20:56 2017-06-08 19:48:48 2017-06-08 22:02:50 2:14:02 0:38:37 1:35:25 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/pool-perm.yaml xfs.yaml} 4
fail 1270663 2017-06-08 05:20:57 2017-06-08 19:52:38 2017-06-08 20:30:38 0:38:00 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
Failure Reason:

Command failed on ovh043 with status 4: u'rm -f /tmp/linux-image.deb && echo linux-image-4.12.0-rc3-ceph-gf7fd5a7ba1f5_4.12.0-rc3-ceph-gf7fd5a7ba1f5-1_amd64.deb | wget -nv -O /tmp/linux-image.deb --base=https://1.chacra.ceph.com/r/kernel/testing/f7fd5a7ba1f5cc4545b20c138a3094c0841a7b2a/ubuntu/xenial/flavors/default/pool/main/l/linux-4.12.0-rc3-ceph-gf7fd5a7ba1f5/ --input-file=-'

fail 1270665 2017-06-08 05:20:58 2017-06-08 19:58:49 2017-06-08 20:42:49 0:44:00 0:32:50 0:11:10 ovh master kcephfs/thrash/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml thrashers/mon.yaml workloads/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

{'ovh076.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/libpython-all-dev_2.7.11-1_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n libpython-all-dev python-all python-all-dev python-setuptools python-wheel\nSuggested packages:\n python-setuptools-doc\nThe following NEW packages will be installed:\n libpython-all-dev python-all python-all-dev python-pip python-setuptools\n python-wheel\n0 upgraded, 6 newly installed, 0 to remove and 91 not upgraded.\nNeed to get 364 kB of archives.\nAfter this operation, 1395 kB of additional disk space will be used.\nErr:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libpython-all-dev amd64 2.7.11-1\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nGet:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-all amd64 2.7.11-1 [978 B]\nGet:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-all-dev amd64 2.7.11-1 [1000 B]\nGet:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip all 8.1.1-2ubuntu0.4 [144 kB]\nGet:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-setuptools all 20.7.0-1 [169 kB]\nGet:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/universe amd64 python-wheel all 0.29.0-1 [48.0 kB]\nFetched 363 kB in 22s (15.9 kB/s)\n", 'cache_updated': False, 'failed': True, 'stderr': "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/libpython-all-dev_2.7.11-1_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': None, 'force': False, 'name': 'python-pip', 'package': ['python-pip'], 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': None, 'deb': None, 'only_upgrade': False, 'cache_valid_time': 0, 'default_release': None, 'install_recommends': None}}, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'python-pip\'\' failed: E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/libpython-all-dev_2.7.11-1_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following additional packages will be installed:', ' libpython-all-dev python-all python-all-dev python-setuptools python-wheel', 'Suggested packages:', ' python-setuptools-doc', 'The following NEW packages will be installed:', ' libpython-all-dev python-all python-all-dev python-pip python-setuptools', ' python-wheel', '0 upgraded, 6 newly installed, 0 to remove and 91 not upgraded.', 'Need to get 364 kB of archives.', 'After this operation, 1395 kB of additional disk space will be used.', 'Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libpython-all-dev amd64 2.7.11-1', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Get:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-all amd64 2.7.11-1 [978 B]', 'Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-all-dev amd64 2.7.11-1 [1000 B]', 'Get:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip all 8.1.1-2ubuntu0.4 [144 kB]', 'Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 python-setuptools all 20.7.0-1 [169 kB]', 'Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/universe amd64 python-wheel all 0.29.0-1 [48.0 kB]', 'Fetched 363 kB in 22s (15.9 kB/s)'], 'cache_update_time': 1496952712}}

pass 1270666 2017-06-08 05:20:58 2017-06-08 20:00:46 2017-06-08 21:14:46 1:14:00 0:41:18 0:32:42 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_fsync.yaml} 3
pass 1270668 2017-06-08 05:20:59 2017-06-08 20:12:28 2017-06-08 22:28:30 2:16:02 0:39:01 1:37:01 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/sessionmap.yaml xfs.yaml} 4
fail 1270670 2017-06-08 05:21:00 2017-06-08 20:20:41 2017-06-08 21:13:02 0:52:21 0:27:36 0:24:45 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh099 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=11.2.0-291-gae0eab5-1xenial ceph-mds=11.2.0-291-gae0eab5-1xenial ceph-mgr=11.2.0-291-gae0eab5-1xenial ceph-common=11.2.0-291-gae0eab5-1xenial ceph-fuse=11.2.0-291-gae0eab5-1xenial ceph-test=11.2.0-291-gae0eab5-1xenial radosgw=11.2.0-291-gae0eab5-1xenial python-ceph=11.2.0-291-gae0eab5-1xenial libcephfs2=11.2.0-291-gae0eab5-1xenial libcephfs-dev=11.2.0-291-gae0eab5-1xenial libcephfs-java=11.2.0-291-gae0eab5-1xenial libcephfs-jni=11.2.0-291-gae0eab5-1xenial librados2=11.2.0-291-gae0eab5-1xenial librbd1=11.2.0-291-gae0eab5-1xenial rbd-fuse=11.2.0-291-gae0eab5-1xenial'

pass 1270672 2017-06-08 05:21:00 2017-06-08 20:22:38 2017-06-08 21:16:38 0:54:00 0:40:27 0:13:33 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/no.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1270674 2017-06-08 05:21:01 2017-06-08 20:27:11 2017-06-08 21:31:12 1:04:01 0:45:49 0:18:12 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml tasks/volume-client.yaml xfs.yaml} 4
pass 1270676 2017-06-08 05:21:02 2017-06-08 20:30:13 2017-06-08 21:16:14 0:46:01 0:29:34 0:16:27 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/xfs.yaml inline/yes.yaml tasks/kclient_workunit_trivial_sync.yaml} 3