Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 1855016 2017-11-16 16:17:37 2017-11-16 16:17:40 2017-11-16 16:49:40 0:32:00 0:15:32 0:16:28 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh017 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmp6z_fCg'

fail 1855017 2017-11-16 16:17:37 2017-11-16 16:17:40 2017-11-16 16:49:40 0:32:00 0:17:49 0:14:11 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml tasks/sessionmap.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh041 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.88.124:6789:/ /home/ubuntu/cephtest/mnt.2 -v -o name=2,secretfile=/home/ubuntu/cephtest/ceph.data/client.2.secret,norequire_active_mds'

fail 1855018 2017-11-16 16:17:38 2017-11-16 16:17:40 2017-11-16 16:55:40 0:38:00 0:15:30 0:22:30 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml tasks/client-recovery.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh020 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

fail 1855019 2017-11-16 16:17:38 2017-11-16 16:17:40 2017-11-16 16:47:40 0:30:00 0:15:17 0:14:43 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-ec-root.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh031 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmp8JeK90'

fail 1855020 2017-11-16 16:17:39 2017-11-16 16:17:40 2017-11-16 16:47:40 0:30:00 0:18:45 0:11:15 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/yes.yaml objectstore-ec/bluestore-ec-root.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
Failure Reason:

Command failed on ovh030 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.87.63:6789,158.69.88.107:6790,158.69.88.107:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1855021 2017-11-16 16:17:40 2017-11-16 16:17:41 2017-11-16 16:59:41 0:42:00 0:17:13 0:24:47 ovh master kcephfs/mixed-clients/{clusters/2-clients.yaml conf.yaml objectstore-ec/bluestore-comp-ec-root.yaml tasks/kernel_cfuse_workunits_untarbuild_blogbench.yaml} 4
Failure Reason:

Command failed on ovh016 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.88.199:6789,158.69.88.183:6790,158.69.88.183:6789:/ /home/ubuntu/cephtest/mnt.1 -v -o name=1,secretfile=/home/ubuntu/cephtest/ceph.data/client.1.secret,norequire_active_mds'

fail 1855022 2017-11-16 16:17:40 2017-11-16 16:17:41 2017-11-16 16:47:41 0:30:00 0:16:13 0:13:47 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh075 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpvUUk5b'

fail 1855023 2017-11-16 16:17:41 2017-11-16 16:17:42 2017-11-16 16:45:42 0:28:00 0:16:06 0:11:54 ovh master kcephfs/cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml inline/no.yaml objectstore-ec/bluestore-comp.yaml tasks/kclient_workunit_suites_fsx.yaml} 3
Failure Reason:

Command failed on ovh006 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.87.11:6789,158.69.88.138:6790,158.69.88.138:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1855024 2017-11-16 16:17:42 2017-11-16 16:17:43 2017-11-16 16:47:42 0:29:59 0:15:11 0:14:48 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/filestore-xfs.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh054 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpDP6Sph'

fail 1855025 2017-11-16 16:17:42 2017-11-16 16:17:43 2017-11-16 16:55:43 0:38:00 0:16:00 0:22:00 ovh master kcephfs/recovery/{clusters/4-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/kmounts.yaml objectstore-ec/bluestore-comp-ec-root.yaml tasks/mds-full.yaml whitelist_health.yaml} 4
Failure Reason:

Command failed on ovh002 with status 1: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpDO7nec'